
AMD Sucks

Forget AMD and that's straight from the source. Sorry for the long build-up on the history, but if I'm to convince you to forget this non-technology, I think it's best you know where it came from. For those in a hurry, the executive summary is in the subject line. ;)

In Spring of 2009, I rewrote the Dojo loader during a requested renovation of that project. The primary pattern used to make it more practical was:

dojo.provide('foo', ['bar1', 'bar2'], function() {

  // [module code]

});

The name of the module is "foo" and it depends on the two "bar" modules to work.
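To make the mechanics concrete, here is a minimal, hypothetical stand-in for that callback pattern -- a toy registry for illustration only, not Dojo's actual implementation (the names `provide` and `modules` are my own):

```javascript
// Toy module registry illustrating the callback pattern above.
// Illustrative stand-in only; not Dojo's real internals.
var modules = {};

function provide(name, deps, factory) {
  // Look up already-registered dependencies and pass them to the factory.
  var resolved = deps.map(function (d) { return modules[d]; });
  modules[name] = factory.apply(null, resolved);
}

provide('bar1', [], function () { return { word: 'hello' }; });
provide('bar2', [], function () { return { word: 'world' }; });
provide('foo', ['bar1', 'bar2'], function (bar1, bar2) {
  return { phrase: bar1.word + ' ' + bar2.word };
});
```

Here 'foo' only runs once both 'bar' modules are in the registry, which is the whole point of declaring immediate dependencies up front.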

There was also a handy (and 100% optional) "cattle prod" function that informed developers when they had their require calls out of order (by throwing an exception immediately). As I see it, allowing require calls in any random order is neither practical nor useful (and Dojo never did that anyway).

dojo.required('foo'); // optional (put at top of module)

There was no change to the existing dojo.require syntax.
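As a sketch of what that "cattle prod" could look like (hypothetical names and bodies, not the actual Dojo code): a `required` call simply throws if the named module has not been loaded yet, surfacing out-of-order require calls immediately during development.

```javascript
// Hypothetical sketch of the optional "required" check: it throws
// immediately if a dependency was not loaded first. The `loaded`
// registry and the function bodies are illustrative assumptions.
var loaded = {};

function require(name) {
  // A real loader would fetch and evaluate the module here.
  loaded[name] = true;
}

function required(name) {
  if (!loaded[name]) {
    throw new Error('"' + name + '" required before it was loaded');
  }
}

require('foo');
required('foo'); // fine: 'foo' is already loaded
```

Calling `required('missing')` at this point would throw, pointing straight at the bad ordering.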

It should be noted that the extra "module pattern" (or "boilerplating" as Burke kept calling it) was used only in those modules that had immediate dependencies (i.e. not those that could wait until load).

I rewrote the entirety of Dojo/Dijit/DojoX using this pattern (and I still have the code if anyone is interested). Roughly 90% of the Dojo file structure was preserved. The remaining 10% (which was a colossal mess of intertwined dependencies, primarily in Dijit) required breaking up a few files.

I fiddled with the Dojo branch all summer, replacing all of the browser sniffs, rewriting the loader, etc. By mid-July, all of the demos worked and unit tests passed. So the announcement was made in the Dojo developer mailing list for people to come have a look.

Along comes this James Burke guy:

For starters, he homed right in on the one aspect that was irrelevant (my fiddling around with the Dijit file structure).

"The number of files has now effectively doubled."

...Of course, that was not the case. Perhaps he was building up to something?

Well, he wants to load modules with async XHR (God only knows why). My rendition used document.write during parse and script injection thereafter. Fast, concise, etc. Why mess with success?
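For readers unfamiliar with that two-phase technique, a hedged sketch (not the branch's actual code): while the page is still parsing, document.write emits a blocking script tag that preserves execution order; after the parse, a script element is injected instead. The readyState check is one way to detect the phase, not necessarily the one used.

```javascript
// Sketch of two-phase script loading: document.write during the
// parse, script injection afterwards. Illustrative only.
function loadScript(url) {
  if (document.readyState === 'loading') {
    // Still parsing: write a blocking script tag (keeps order).
    document.write('<script src="' + url + '"><\/script>');
  } else {
    // Parse finished: inject a script element instead.
    var s = document.createElement('script');
    s.src = url;
    document.getElementsByTagName('head')[0].appendChild(s);
  }
}
```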

"I would prefer to just change the loader to just use the xd loader by default and allow it to load local modules via async XHR."

NIH, right? Anyway, I allayed his concerns on the "doubled" files:

"Don't have time to get into the discussions at the moment, but the split files are not permanent. I used that technique to get some of the Dojo/Dijit stuff up and running so I could test the demos. There are always multiple options in rearranging this stuff. I just added some logic to the new - required - function to deal with modules that insist on treating require as if it worked synchronously (best avoided during design with addOnLoad IMO.) So I'll be able to sew them all up shortly."

...and, as we shall see, I did just that later in that month in preparation for a merge with the trunk.


"I always thought it was necessary to get the source before executing it and then work out and load the dependencies first. The sync XHR was a way around that problem since it basically blocked execution of the rest of the file while dependencies were loaded. If sync is not used, then the only way I saw of getting the module code without executing it immediately was via an XHR call.

The only other option I saw was to force module writers to wrap their code in an object/function wrapper, but that feels awkward to me..." (emphasis mine)

Aha! So he didn't like the idea at that point. But he didn't get it either; I wasn't forcing module writers to wrap their code in anything. The use of the above callback pattern was limited to modules with immediate dependencies (i.e. not dependencies that sat unused until load).

September 2009:

Burke said:

  • The change in loader behavior. Using doc.write + script tags means that modules that have direct dependencies on another module need to be re-written into two files. This will break code in the field.

And I replied:

"There are no more split files (as of a few weeks ago). There is a new method though, so some apps would have to be updated. Seems like more of an internal problem though (and almost all of those are fixed internally). Most external apps shouldn't need to do anything differently."

So he was harping on an issue that didn't exist at that point.

The new method was called "required" and threw an exception immediately if the rules for nested dependencies were not followed. It was quite useful in untangling the Byzantine mess of Dijit dependencies. The way I saw it, if the new system would work for that stuff, it would work for anything in the real world.

"And in the larger scope, I explicitly avoided forcing module creators to create a function wrapper around their code -- it is more boilerplate." (emphasis mine)

So did I, but he wasn't paying attention (at least not at this point).

"I am still not a fan of requiring a user-coded function wrapper around the module for it to load correctly. I would rather see the normal Dojo loader use async xhrs to get speedups in initial loading..." (emphasis mine)

Just out of it. :( And the assertion that you could speed up script loading via XHR (as opposed to document.write) is obviously false.

"Some examples of the extra boilerplate in the featureDetection branch:"

They've since deleted that entire branch (on my order), but I still have all of the files here.

Another guy chimed in:

"I admit I haven't spent a whole lot of time looking at the feature detection branch, but my gut impression from what I've seen so far is that the "new loader" is a premature optimization"

Premature optimization === something I haven't looked at, tested or even fully understood. These guys were driving me nuts by this point; their collective confusion was causing the discussions to spin into unreality.


"I was excited about this feature branch change, as I often work on
apps that, during development, take forever to load. However, I don't
see that new boiler plate as realistic. As it stands it's repetitive
and more verbose, causing the need to basically 'require' twice."

Utter nonsense of course. None of the Dojo modules 'required' anything twice. I'm not sure what code he was looking at.

Moving further off in the weeds:

"However, if that could be fixed so that the required([X, X, X] ..code)
could replace the "require, require require, code", then we actually
save a lot of bytes."

One man's nonsense is another man's "proof":

"Unfortunately I was skeptical about David's ability to pull off the
new loader because I didn't think he would be aware of all the use
cases for the loader/builder that have come up over the years. You
seem to be proving that out ( a side question - are there unit tests
for these cases?)."

It passed every Dojo/Dijit/DojoX unit test and all of the demos worked by this time. Apparently, they just didn't want to take the time to look at it.

"What I would like is to have my cake and eat it too - I'd like my
complex grid application that loads a billion dojo files locally to
use the doc.write style loader, while my custom files use the current
method. Is that possible without the new boiler plate? Because that
looks to be too big of a change, even within the library." (emphasis mine)

It was not a "doc.write style loader", but a whole new pattern that wrapped modules in callback functions (just like "AMD").

"Not leaving out XD, is it possible to use doc.write, and after the
initial load it switches t the XHR load? But again.... is that new
required call really necessary?"

Yes, it worked cross-domain (which was one of the main points) and yes, you could still use XHR (or, preferably, script injection) after load. And no, the additional "required" function was not strictly necessary, but it was quite useful for ensuring that developers followed the rules for nesting Dojo dependencies.

"I think I have to agree with Rawld that this may not be a problem that needs fixing."

Dojo's loader? Not a problem? :)

Here is Burke replying to my correction of the previous assertion that modules are now "required twice". :)

No big changes. The require/provide stuff does what it always did. There are optional required/provided methods.

"To clarify: in order to avoid having people understand the intricacies of code loading, the general advice will be to always use the required callback wrapper.

While it is technically true that you might be able to construct a module that does not need to use the required callback wrapper, in practice this will be hard to explain to the developer, and to avoid extra support costs of debugging those kinds of issues, the general advice will be to always use the callback wrapper to define the module code.

So, to me the issue is still down to how people feel about requiring modules to switch to wrapping the module code in a callback wrapper."

From the "RequireJS" documentation:

// Start the main app logic.
requirejs(['jquery', 'canvas', 'app/sub'],
function   ($,        canvas,   sub) {
    //jQuery, canvas and the app/sub module are all
    //loaded and can be used here now.
});

...His only "innovation" was to eliminate the "boilerplating" in the modules themselves, which required the use of dodgy techniques (like load listeners on SCRIPT elements) to determine when scripts had finished loading. IIRC, it injected scripts into the HTML element as well (as opposed to using document.write). I have no idea how it "works" today, but the idea is still the same.
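The load-listener technique mentioned here can be sketched as follows (illustrative only, not RequireJS's actual code; old IE, for instance, needed onreadystatechange instead of onload):

```javascript
// Hedged sketch of script injection with a load listener, used to
// detect when an injected script has finished loading. Illustrative
// stand-in, not the real RequireJS implementation.
function injectScript(url, onLoaded) {
  var s = document.createElement('script');
  s.src = url;
  s.onload = function () { onLoaded(url); };
  document.getElementsByTagName('head')[0].appendChild(s);
}
```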

So yeah, he copied it. Later in the threads (and on his "RequireJS" site), he tried to hide that fact by constantly referring to the loader in my branch as a "document.write" optimization.

These sorts of "discussions" went on into the Fall of 2009. Here's a rare sane response from the Dojo camp:

That guy had actually played around with the new branch and therefore knew it worked faster, cross-domain and was very simple to understand and implement. Imagine, actually trying something before asking a million off-the-mark questions. :)

Burke again:

"So, while I do not like the actual boilerplate in the existing featureDetection branch, I believe a boilerplate that is more like the XD would be somewhat more tolerable (just a possible future syntax example):

  require: [
  module: function(){
    //module code goes in here

So he is now suggesting changing the dojo.require syntax, which made no sense in this context (the whole idea was to preserve compatibility with old Dojo versions as much as possible).

"This could be simplified to dojo.load("", "", function(){}),"

That's basically what I had in the first place. It's what he kept referring to as "boilerplating". He wanted to turn it on its ear to "save extra bytes", "avoid confusion" (or whatever), but it wasn't practical to do so. Practicality trumps aesthetics (plus a few bytes) every time, but not in the Bizarro world of Dojo. He didn't like the very practical and uber-compatible use of document.write either (perhaps JSLint told him it was bad). :)

On and on...

"The benefit is first time usability: the user can actually get intelligible errors and use the debugger in any browser out of the box. It helps our XD vs. normal build discussion, since an XD build would no longer be needed."

He's starting to get it, but...

"I would use the the basics of the existing xd loader as the main loader, with the document.write() before-page-load optimization."

There it is again. Why does he insist on referring to this new pattern as a "document.write optimization" when the fact that it uses document.write under the hood is irrelevant? Should be able to guess at this point. ;)

In short, he apparently worked on the original (and execrable) XD loader and couldn't stand to see it on the cutting room floor (NIH syndrome). But he was also setting up to announce "his" big new idea.

Didn't take long:

"Using David Mark's document.write() optimization before page loads, and given the feedback so far with people being OK with a function wrapper around a module, to allow that kind of script loading, I propose the following syntax for modules in a Dojo 2.0 system."

Brilliant! But it wasn't his idea at all (though the way he worded it sounds like my role was to simply "optimize" the existing Dojo loader).

"It is based on some previous discussions in another thread. Another motivation: avoiding global names."

Avoiding global names? What's that got to do with the price of beans in... JFTR, nothing I did added any additional global variables to Dojo. I think he was just trying to obfuscate the issue.

Then Alex Russell (who worked on the original loader) chimes in at the end:

"I like where this is going! A terser way to specify multiple requires
and tying in the deferred which gets triggered as a result is nice."

Missed by a mile. You can't use dojo.Deferred in the loader. It goes without saying that you can't use any Dojo module in its loader.

There then followed an interminable discussion about why that was a bad idea. And finally, the inevitable plug for a new script loader library by James Burke:

"I know this is a bit off-topic, we need to get Dojo 1.4 wrapped up, but I went ahead and made a module loader that was informed by the conversation on this thread and others in dojo-contributors.


...which was renamed to "RequireJS" a couple of months later:

Meanwhile, Dojo was left with their original sluggish, complicated and buggy loader that used synchronous XHR to load scripts (among several other failings) and Burke went off to try to get jQuery to use his new loader. Today he seems intent on creating some sort of cottage industry out of an idea that was only ever meant for use (and only made sense) within the confines of Dojo. And he didn't even implement it well.

"In the one hand, we have a reliable, browser-supported, time-tested approach, that performs demonstrably better on real-world pages; and in the other hand, we have a hacked-together "technology" that in the past has actually broken every site that used it after a browser update, that performs demonstrably worse on average, and with a far greater variance of slowness."

Ain't that the truth? :)

'Taken from the horses mouth... "But here is the plain truth: the perceived extra typing and a level of indent to use AMD does not matter. Here is where your time goes when coding; Thinking about the problem; Reading code;" ...yes, this is actually on the page. Let me explain not only how absurd but utterly idiotic this statement is; On one level you have extra typing, most of this "extra" typing comes from RequireJS wrapping all of your code into it's modules which is copy-pasta but not for everything, and on those not for everything codebases you now have more problems to think about, and more code to read than before.'

Well said. And interesting that he mentioned "wrapping" modules. That sounds suspiciously like the Dojo "boilerplating" that Burke kept harping on; I guess extra typing is okay if you are using his library. :)

So maybe he copied even more closely than I thought or perhaps the thing has "evolved" to be exactly what I had done for Dojo in the first place. I have no idea as I stopped reading his code a long time ago. I suggest that aspiring JS developers do the same unless they want to learn from the "show me where it fails" school of browser scripting. ;)

DanMan commented Jul 20, 2012

This just confirms my bad gut feeling about this. I'm currently trying to deploy a not-all-that-big Dojo project on our customer's online server, and what works just fine locally doesn't when put online.

I've completely lost faith in Dojo with the new direction they're taking. I don't need or want this AMD stuff in the first place, and now they shove it down my throat since the old require() stuff is apparently broken, for all I know. It's a scripting language, people. Why would you want to compile it?


The reason it doesn't "work" (quotes indicate that success is only coincidental for that and similar projects) online is their ridiculous build process. The last thing you want to do is put a bunch of complicated JS together, test it to death and then run it through a blender on deployment. You should build your JS assets at design time.

The JS code inside Dojo is so atrocious, in every conceivable way, that the authors can only be characterized as inept (that's a paraphrase of Richard Cornford, who is quite accomplished with browser scripting). And yeah, I've seen it too. :(

As for the AMD stuff, it's sort of the final punchline/death knell for that project. And, exactly, it's a high-level scripting language; how "compilers" came into vogue is anybody's guess (marketing?)

My advice is to start removing bits of Dojo a little at a time until it is all gone. Or, if you are in a pinch, drop me a private email, as I do this sort of consulting (e.g. making Dojo behave and ultimately vanish).

Take care and good luck!

cref commented Jun 11, 2013

amen to that.

david-mark commented Oct 30, 2013 edited

This is redacted (e.g. with "blahs"), but actual code from "compiled" Dojo:

define("blah", ["dijit","dojo","dojox"], function(dijit,dojo,dojox) { dojo.provide("blah"); ... })

So James Burke was copying to the letter. Doesn't get more carbon-copy than that. The little jackass must have thought I wouldn't find out (despite my seeing him do it at the time). :)

Yeah, he changed "provide" to "define", all the while blithering about imagined "document.write optimizations". Disingenuous and stupid. :( Maybe he thought that if he rewrote enough of the code it would count as a new invention. Certainly he didn't use my code as his BS breaks. ;)

What does this mean for the man on the street? Avoid this guy like the plague. ;) Dojo too.


I used Dojo in the past; it was so simple syntactically, kind of object-oriented (I hated jQuery for basically the syntax). Now, with the introduction of this AMD crap, the syntax has become recondite, and their reasons for doing it don't sit well with me. And what's with not supporting global namespaces? If a developer uses them, it's his problem to manage namespace clashes; why force him to use something?


Forget Dojo and forget RequireJS (unless masochistic).

david-mark commented May 10, 2014 edited

Translating the RequireJS "history" at:

"In 2009, David Mark suggested that Dojo use document.write() to load modules before the page loads to help with that issue"

More like this: In 2009, David Mark rewrote the stock Dojo loader and jettisoned the failed "x-domain loader" by James Burke (and whomever else on the Dojo team). That's never been in dispute. The original attempt at x-domain script loading looks nothing like AMD, whereas my loader looks exactly like it (from the outside anyway). It's the same design as what you now see in the AMD-enabled Dojo versions.

Also in late 2009, James Burke, developer of that failed x-domain loader, came around the effort to test the new Dojo loader, really curious about how I had solved the problems he never could. No dispute there either. What the hell else was he doing there? He'd been absent the previous six months or so while the new branch was being developed. Not a peep out of him until the demonstrations started.

Then within a week or so, James Burke has magically solved all his old problems using my new design. That can hardly be in dispute either (see above).

He described that as:

"I fleshed out some of the details for RequireJS (then called RunJS) on the dojo-contributors list"

No he didn't. He injected himself into the conversations regarding my loader design (see above). Then he ran off to try to market it. None of this was his inspiration or design. He just showed up at the end of the project, walked off with the inspiration and rewrote the code behind it in his own incompetent manner (see above).

"...and Mike Wilson originally pushed for a more generic loader that could load plain files as well as allow for different contexts."

That's the biggest joke of all as it is made in the same breath as the description of my input. I vaguely remember some guy called Mike Wilson around the Dojo discussions. He had nothing to do with any of this. Regardless, the suggestion was ill-advised as text files, style sheets, etc. can't call back. That's what was screwing up the old pre-AMD Dojo loader as it had to load plain text files for localization. Obviously a really lousy design for localization, but that's another issue. Mike Wilson was trying to inject Dojo's old mistake into this new loader.

Also, I've been asked about the "A" for asynchronous in "AMD" and how that could fit with a "document.write optimization". It couldn't (which is why he kept harping on document.write). As mentioned above, it used that only during the parse. On load, it switched to script injection (copied that bit from My Library). Put require calls in the HEAD and block content. Put at end of the BODY, do not block content. Put in a "ready" callback (or load listener) and load asynchronously (for what that's worth, which is virtually nothing). For Dojo, there was also a switch for it to run in all-synchronous mode (for compatibility with the old versions). Had all the bases covered, proven by the fact that all of the unit tests still ran (in either mode).

Dojo always ran synchronous XHR (which locks up browsers) for script loading, unless employing the notoriously flaky and over-complicated "x-domain loader" by James Burke. That code went on and on and, more importantly, failed sporadically, leaving an endless spinner on Dojo demos. Nobody could possibly dispute that, at least nobody who was around Dojo at the time.


Addendum for those unfamiliar with the old Dojo loader (predating my branch and AMD). It looked like this (reconstructed from memory of the stock API, so treat the exact module names as illustrative):

dojo.provide('my.module');
dojo.require('dijit.Dialog');
dojo.require('dojox.grid.DataGrid');

// module code follows, on the assumption that the require
// calls above have already fetched and evaluated everything

It's clearly a synchronous design. Behind the scenes, it used synchronous XHR. As mentioned, this is a bad idea as it locks up the browser until the request is complete (or times out). It's also limited by the same-origin policy (i.e. not cross-domain). And, in Dojo's case, using both eval and window.eval (in the same line no less). :(
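The synchronous-XHR style being criticized here can be sketched like this (illustrative only, not the actual Dojo code):

```javascript
// Sketch of synchronous-XHR module loading. The third argument
// `false` makes the request synchronous, freezing the browser UI
// until the response arrives; eval then runs the fetched source,
// typically with no useful file/line info for debugging.
// Illustrative stand-in, not the real Dojo loader.
function syncLoad(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, false); // synchronous: blocks everything
  xhr.send(null);
  eval(xhr.responseText);      // evaluate the module source in place
}
```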

The "x-domain loader" referenced above was Burke's attempt at an optional cross-domain loader implementing the above API without callbacks. It was long-winded and infamous for endless loading indicators, exceptions, etc. It became redundant (and incompatible) once the stock loader was switched to document.write and script injection with callbacks. As mentioned in the above-linked discussions, it was not part of the future of Dojo at that point. Until he jumped in and messed up the whole thing for years. That was late October 2009.

Now, I don't think he became a one-week wonder who came up with the same (or, at the time, similar) patterns on his own and, having already seen the answers (with unit test results), wrote everything himself from scratch just in time for November 2009. But that's what he claimed at the time (and apparently still does). :)

I say similar at the time as there were a number of slightly different patterns discussed at the time (see above). And I know whatever callback scheme he tried first failed (again see above). I forgot about it for years and then saw this in some recent "compiled Dojo" code.

define("blah", ["dijit","dojo","dojox"], function(dijit,dojo,dojox) { dojo.provide("blah"); ... })

There wasn't any reason for Dojo to wait years for that, as it had it in October 2009 (with a compatibility mode). This AMD stuff must have come full circle. Hope nobody is planning on patenting it. :)

Also odd that something like that would be in the "compiled" code. My branch used something like this to remove unneeded "boilerplating" from the ultimately concatenated script. I mean, isn't that the point?

// begin no-compile
dojo.provide('foo', ['bar1', 'bar2'], function() {
// end no-compile

[module code]

// begin no-compile
});
// end no-compile
Not all of them, of course, but I found that many of the internals were limited to calling dojo.declare, dojo.mixin, etc. It's been a while and I don't remember all of the details, but it follows that if you concatenate the "modules", there is no reason for require calls to do anything (they should just be stripped out during the "compile" process), and so provide calls become redundant as well, providing nothing but a variation on the old "module pattern". Never said it was rocket science, just my design for Dojo's silly loader.
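The stripping step being described could be as simple as deleting everything between the markers at build time. A hypothetical sketch, with the regex as my own assumption rather than the branch's actual build code:

```javascript
// Hypothetical build-time helper: delete the wrapper lines between
// "begin no-compile" / "end no-compile" markers so that concatenated
// modules carry no loader boilerplate. Illustrative only.
function stripNoCompile(src) {
  return src.replace(/\/\/ begin no-compile[\s\S]*?\/\/ end no-compile\n?/g, '');
}
```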

Regardless, as mentioned repeatedly, this whole Dojo (and Google Closure and the like) build process is backwards. You build first, not last. You write and test against the same code that you release (be it to QA or the unsuspecting public). You don't run it through a "compiler" as a last step before release. Also, no matter how simple the loader (and it didn't get much simpler than mine), you don't want to introduce any variables related to script loading, because your audience will not be using script loading (at least not in any sort of reasonable "compiler" scheme).

Doesn't matter if "endorsed" by IBM or Google. They are using these things as giant public Betas for whatever it is that they want to try to do down the road. The "compiler" strategy is undeniably idiotic and wasteful. And the idea that huge companies like IBM and Google could never endorse idiotic ideas (or have ulterior motives) is not an argument. ;)
