
Again with the modules


Doing this as a gist because I don't have time for a polished post; I apologize:

If you had to define an optimal module system that had to work with async, networked file IO (the browser) what would that look like? It would not be node's system, for the following reasons:

1) the require('') in node is very imperative. It can be called at any time and means a module should be synchronously loaded and evaluated when it is encountered. How would this work in the browser for a call like require(someVariable + '/other')? How would you know what module to include in any bundle? For those cases, you should allow an async require to fetch, and leave dependencies that can be bundled in the require('StringLiteral') format. Node has no allowance for this, and browserify will not do a good job with those types of dependencies. It will need hints, or an explicit out-of-module listing of what to include.
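To make the distinction concrete, here is a toy sketch (my own illustration, not browserify's actual code) of why a bundler can trace string-literal requires but not computed ones:

```javascript
// Toy dependency scanner, the way a naive bundler might trace dependencies:
// it can find require('string literal') calls, but a computed ID like
// require(someVariable + '/other') is invisible to it.
// (Illustration only; real bundlers parse an AST rather than use a regex.)
function findStaticDeps(source) {
  const deps = [];
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let match;
  while ((match = re.exec(source)) !== null) {
    deps.push(match[1]);
  }
  return deps;
}

const source = [
  "var model = require('app/model');      // statically analyzable",
  "var name = getViewName();",
  "var view = require(name + '/other');   // invisible to static analysis"
].join('\n');

console.log(findStaticDeps(source)); // → [ 'app/model' ]
```

Only the string-literal dependency is discoverable; the computed one has to be handled by an async require form (or hints to the bundler).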

Once you have an MVC system in the browser, it will be common to delay loading of a view and its controller until it is needed, and rely on routing via URLs or button actions to know what module to imperatively ask for next. This works well in AMD since it has a callback-style require for this. This really helps performance: the fastest JS is the JS you do not load. This is a very common practice in use with AMD systems. browserify cannot support it natively. The suggestion when using browserify is to choose your own browser script loader and figure out the loading yourself.

2) Similarly, since there is no synchronous module fetching and execution in the browser (at least it would be madness to want to do that), you cannot reliably preload a modifier to the module system so that it takes effect for the rest of the module loading. The example in node is requiring coffee-script in your top-level app so that any .coffee files are considered in module loading.

Instead, AMD has loader plugins that allow you to specify the type of resource that is being requested. This is much more robust, and leads to better, explicit statements of dependencies. Plus, loader plugins can participate in builds. This is better than the browserify plugins for this type of thing because of the imperative nature of JS code: fs.readFile(someVariable + 'something.txt').
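As a rough illustration (hypothetical IDs, not the actual RequireJS implementation), a loader splits a plugin-prefixed module ID at the first '!' to find the plugin and the resource it should load:

```javascript
// Toy sketch of how an AMD loader interprets a plugin-prefixed module ID
// such as 'text!templates/row.html': everything before the '!' names the
// loader plugin, everything after names the resource it should load.
// (Hypothetical IDs; real plugins also participate in builds.)
function parsePluginId(id) {
  const bang = id.indexOf('!');
  if (bang === -1) {
    return { plugin: null, resource: id }; // plain module ID
  }
  return { plugin: id.slice(0, bang), resource: id.slice(bang + 1) };
}

console.log(parsePluginId('text!templates/row.html'));
// → { plugin: 'text', resource: 'templates/row.html' }
console.log(parsePluginId('app/main'));
// → { plugin: null, resource: 'app/main' }
```

Because the resource type is stated right in the dependency declaration, a build tool can see it too, instead of having to guess at imperative calls like fs.readFile(someVariable + 'something.txt').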

Static declaration of dependencies is better for builds. So if you like builds, favor a system that enforces static declaration, and has an async form for non-declarative module uses. ES modules will go this route, and explicitly not use node's module system because of this imperative dependency issue that node has.

The parts that are ugly about AMD:

1) It explicitly asks for a function wrapper. Node actually adds one underneath the covers when it loads modules. browserify does too when it combines modules. While it would be nice to not have the function wrapper in source form, it has these advantages:

a) Using code across domains is much easier: no need to set up CORS or restrict usage to CORS-enabled browsers. You would be amazed by how many people would have trouble with this, as it is a hidden, secondary requirement. So, they are developing just fine on their local box, do a production deployment, then things don't work. This is confusing, and it is not obvious why it fails. You may be fine with knowing what to do for this, but the general population still has trouble with it.

b) Avoids the need to use eval(). Eval support in the browser has traditionally been uneven (scope differences), and now CSP makes it even more of a hazard to use. I know there has been at least one case of a "smart" proxy that would try to "protect" a user by stripping out eval statements in JS.

In short, the function wrapper is not ideal, but it avoids hard-to-trace secondary errors, and it really is not that much more typing. Use the sugar form if you want something that looks like commonjs/node.

2) Possibility for configuration blocks: front end developers have much more varied expectations on project layout. Node users do not. This is not the fault of requirejs or other AMD loaders though. At the very least, supporting a configuration block allows more people to participate in the benefits of modular code.

However, there is a convention in AMD loaders of baseUrl + module ID + '.js'. If a package manager lays out code like this, then there is no config block needed in an AMD loader. volo does this.

npm could even do this to bridge the gap: when installing a package called dep, put it at node_modules/dep and create a node_modules/dep.js that just requires dep's main module. That would also then work for AMD loaders (if the modules installed were AMD compatible or converted) if the baseUrl was set to node_modules.

So, it is entirely possible to stick with the AMD convention and avoid config blocks. Package manager tools have not caught up yet though. Note that this is not the fault of the core AMD module system. This is a package manager issue. And frankly, package managers have been more focused on layouts that make it easy for them vs. what is best for runtime use and avoids user configuration. This is the wrong decision to make. Making it easier for users and runtimes does not actually make the package manager that much more complicated.

On package managers:

It is important to separate what a package manager provides and what the runtime module system provides. For example, it is possible to distribute AMD-based code in npm. amdefine can help use that code in a node environment. Particularly if the dep.js style of file is written out in the node_modules directory.

I would suggest that npm only be used for node-based code though, to reduce confusion on what can be used where. Particularly given the weaknesses of node's module system for browser use. However, some people like to distribute code targeted for the browser in node because they like npm. So be it.

But also note that a strength of node-based npm use, nested node_modules for package-specific conflicting dependencies, is actually a weakness for browser use: while disk space is cheap for node uses, delivering duplicate versions of code to the browser is really wasteful. Also, there is no need for compiled C code dependencies in the browser case, so some of npm's capabilities around that are unnecessary.

It would be better to use a different type of package manager for front end code: one that tried to reuse existing installed module versions, possibly warning the user of differences, and only writing out an AMD map config in the cases where duplication is really needed.
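For those rare cases, an AMD map config can express the duplication per-module; a hypothetical sketch (module names invented):

```javascript
// Hypothetical RequireJS map config a package manager could write out:
// every module gets the shared jQuery, except one legacy plugin that
// truly needs the old version.
requirejs.config({
  map: {
    '*': { 'jquery': 'jquery-2.0' },              // default for everyone
    'plugins/legacy': { 'jquery': 'jquery-1.3' }  // the one exception
  }
});
```

The point is that the conflict is stated explicitly in one place, rather than silently shipping multiple copies.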

In closing:

My biggest complaint is that node explicitly ignored browser concerns when designing itself and choosing its module system, but then some people want to use those design decisions and force them on browser use of modules, where they are not optimal. Just as node did not want to compromise to meet existing browser uses of JS, I do not want to use a less-capable module system in the browser.

I am hopeful that ES modules will have enough plumbing to avoid the function wrapper of AMD. But be aware that ES semantics will be much more like AMD's than node's. And the ES module loader API will be robust enough to support config options and AMD loader plugins.

But note: this is not to say that someone cannot use node modules with npm and browserify to make something useful that runs in the browser. Far from it. But it will be restricted to the constraints above. There are still wonderful things that fit in that box, so more power to them for constraining themselves to that box and still shipping something useful. There is just more to browser-based module usage though. And I do not want their unwillingness to address that wider world as a reason to accept less for myself. The good news is that the internet is big enough for both sets of users. So let's all hug and move on to just making things.

(note this is a gist, so I am not notified of comments. I may delete this at some point if I get around to doing something more polished for a blog post, or just do not want to see its rough form any more)

Am I right that your main complaint about using node-style require in the browser is lack of async dependency loading? That's easily solved in ES6 as node/browserify require() could be rejigged to use ES6 generators internally to do the async dependency loading, without losing the terse sync syntax.

@timoxley I don't think that was his main complaint. You can certainly build synchronous syntax together to make it work, but the semantics of how things load are different. I don't know if I can sum up his 'main complaint' into something small, but I think his last paragraph or two hits the nail on the head. There is a subset of things that just building from node-style modules will do just fine at. And there's probably some sort of hack to get most other things in (be it eval or CORS or whatever), but he says that he'd rather just create a module system with semantics that handle those features directly. (edit: im a bad wrighter)

The browserify file size issue is solvable with tooling.

I've had massive dependency trees in production because I use a lot of modules and have not run into any filesize problems. (except swapping out esprima with a regexp which saved 200kb of JS).

Even if you do, browserify will dedupe for you. You can just alias npm install to npm install; npm dedupe, which solves the majority of the problems whilst still not running into version clashes.

What tooling does AMD offer to deal with file size? Other than there being only one version per name, and good luck dealing with version conflicts. That just leads to "we are still using jQuery 1.3 because one of our plugins breaks in jQuery 2.0", which is a different but worse problem.

new URL(rel, base) is now in Chrome dev channel. No more JS libraries needed to resolve a URL.

— Erik Arvidsson (@ErikArvidsson) October 19, 2013

@Raynos - Dependency resolution on the browser side can be complicated (and package managers are trying to solve this). But the idea of having every module keep its own copy of jquery/underscore/what-have-you that all gets compiled together for production is not an acceptable solution.

@justonwinslow that's exactly the problem npm dedupe solves, but the real issue is the fact everyone finds loading massive libraries that pollute the global namespace acceptable practice in the first place. The bigger problem is code organisation and encapsulation, not bytes across the wire.

@Raynos -- it is a different problem, but it is not worse. The world where we end up with 3 versions of jQuery because 3 of our plugins relied on different versions is a far more important thing to avoid on the client-side than the inconvenience of version incompatibilities. Even so, this multi-jquery-world is still possible with a few lines of config in AMD. No one would recommend it though.

AMD doesn't necessarily need explicit tooling to deal with filesize (though the community's rejection of multiple versions of the same module is a feature in my eyes). The simple act of declaring modules asynchronously, coupled with providing a script loader, helps relieve filesize issues by allowing lazy-loading of modules with built-in, recursive, first-class support for resolving dependencies before executing (because it's just a module that hasn't been built yet!).

Tools like browserify allow you to break up your code into modules, but loading is up to you, and dependencies are expected to be resolved. There are a myriad of tricks, which @jrburke covers in this gist, that people use to get around some of the limitations of this method. He covers good ground on why they aren't enough for him, though they might be for a lot of people, and he's cool with that.

@timoxley - npm dedupe solves the problem of loading the same version of a dependency more than once. It does not protect against the 3 versions of jQuery problem.

@SlexAxton the monolith is the problem, not the module system. The dedupe issue is somewhat alleviated by semver, i.e. if I have modules depending on both ~1.3.1 and ~1.3.2, they can happily share a dependency.
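A toy version of that tilde-range check (simplified; a real resolver would use a semver library):

```javascript
// Toy illustration of why semver ranges let modules share a dependency:
// ~1.3.1 means >=1.3.1 <1.4.0, so a single installed 1.3.2 satisfies
// both ~1.3.1 and ~1.3.2 and the two modules can share one copy.
// (Simplified: handles only plain x.y.z versions and tilde ranges.)
function satisfiesTilde(version, tildeRange) {
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  const [rMaj, rMin, rPat] = tildeRange.slice(1).split('.').map(Number);
  return vMaj === rMaj && vMin === rMin && vPat >= rPat;
}

console.log(satisfiesTilde('1.3.2', '~1.3.1')); // → true
console.log(satisfiesTilde('1.3.2', '~1.3.2')); // → true
console.log(satisfiesTilde('1.4.0', '~1.3.1')); // → false
```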

the only way to fix the 3 versions of jquery problem is by publishing individual jquery methods in the same way that lodash recently did

if the question is 'how do I support my legacy javascript application that uses monolithic libraries?' then requirejs is awesome. if the question is 'how do I use a build step to bundle together small, modular javascript libraries without relying on globals?' then browserify is awesome

Allowing 3 different versions of jquery isn't a bug, it's actually an awesome feature.
It means you can improve parts of your app without having to rewrite old parts that already work.
your file may get a little bigger, and you may be able to measure that effect, but also, your app still works.
And you can refactor the old code to eliminate the duplication - at your leisure. If your module system only supports 1 version of each module, then there is a tremendous disincentive to making breaking changes. This is much more the problem that most people have.

It's easy to measure file size, and hard to measure the pain of legacy code, so it's file size that gets emphasized.

If/when file size becomes more important than your app working or not, then refactor, and maybe send some pull requests. If you really do depend on 3 versions of jquery, and your module system can't handle that - you'll have a broken app, which is much worse than a slightly slower one.

People do mention the async loading thing fairly often, and I have heard of people doing it with browserify. I would love to hear a concrete example of an app that uses async module loading and would have perceivable differences using async loading vs static. Does anyone have a link to such an example?

Regarding the use case of require(someVariable + '/other'), that is not a problem for your module loader to solve. Determining what code to load/use falls under the umbrella of polymorphism that, while not tied to a formal type system, is still a valuable concept/practice in JS.

As for doing things like parsing Coffeescript on demand in the browser, why would you want to do that? It seems obvious that if you are at all concerned with performance, precompiling your source would be step 1.

Do people use AMD in production without a build step? That was actually my biggest problem with AMD; I wanted to minimize the differences between the code I run in dev and production, and if I am building during dev why not use the system with terser syntax and access to npm modules?

I don't understand this argument. In the case where 3 modules require 3 different versions of a dependency they need that version and are likely to break with a different one. If they can all work with the same version then they should be requiring a similar version.

Is anyone actually arguing it's better to break under version discrepancies and make the user figure it out after debugging in order to maintain a smaller file size for code that is likely broken than it is to simply require the user to de-dupe their dependencies if the file size is large?

There is no use case I know of where it is better to be a smaller file size and broken than a larger one and working.

My biggest complaint is that node explicitly ignored browser concerns when designing itself and choosing its module system, but then some people want to use those design decisions and force them on browser use of modules, where they are not optimal. Just as node did not want to compromise to meet existing browser uses of JS, I do not want to use a less-capable module system in the browser.

I couldn't agree less with this :)

Node created the best module system it could for the largest audience and ecosystem. It didn't ignore much at all and, being that I was around when a lot of this stuff was being worked out, I can say we did think about the differences and discrepancies between the way node was doing things and the way the browser was.

AMD was created by people with "module problems" in the browser. This is a self-selected group of people who are using enough JavaScript that they needed to modularize and built tooling and a community around that problem which traces its history all the way back to early Dojo. The people not represented by this group are all the people that drop jQuery and a few other script tags in their code and called it good.

If a few script tags "just worked" for you then you really didn't give a shit about AMD or modules at all. Node's module system is built for those people, so that they can modularize, publish, and then take part in an ecosystem rather than improve the modularity of their silo.

Node's module system is not optimized for the 1% of people with the worst problems, quite the opposite. Node's module system is optimized to create an ecosystem of published modules that can work interdependently with the least amount of effort possible. In fact, more than just the module system but all the patterns in node.js try to encourage this.

Your point of view is, understandably, that of someone who "knows what you're doing." Most node modules are written by people who don't know what they are doing and they are used by people who know even less. Yet, it all seems to work. This is not magic or luck, this is by design.

You could just as easily have said that we "ignored" the concerns of people coming to node.js from Python and Ruby in that we were optimizing for things the Ruby and Python module systems failed at, so we disregarded most of their decisions. The truth is that we ignored very little, we considered pretty much everything, and we always tended to err away from the point of view of "professional" developers and built for the people that weren't even using node.js or Ruby or Python or AMD because the barrier to entry was too high.

The success of this approach is viewable by the growth of the ecosystem which outpaces all of our contemporaries.

@mikeal, thanks for a look into the past of how node's module system got to where it is today. Fascinating!

If a few script tags "just worked" for you then you really didn't give a shit about AMD or modules at all.
...
The truth is that we ignored very little, we considered pretty much everything, and we always tended to err away from the point of view of "professional" developers

I think this paints a picture of only two types of developers. Those who are dojo-using Enterprise Scale Coders™ and those who are graphic designers reading Learn Javascript in 24 Hours.

In reality, I think there's a large group of work-a-day programmers out there for whom Javascript is likely just one of several languages they use. For those folks, understanding how to use a module loader beyond the novice level is not a waste of their time, even if they don't have to coordinate the development of hundreds of modules from dozens of coders.

The problems @jrburke raises are not contrived. Synchronous loading and heavy page-weight might be fine for a prototype or in early development, but eventually you want to alleviate that pain. How does node's require() grow with you and transition from novice to intermediate usage (whether by a "professional developer" or by a former novice who's gaining competence in the language)?

@SlexAxton

The world where we end up with 3 versions of jQuery because 3 of our plugins relied on different versions is a far more important thing to avoid on the client-side

That means one of two things. Either those 3 plugins depend on three different jQuerys with three different incompatible interfaces, and you need 3 copies or bugs galore. Or those modules are buggy and should have their dependency changed to a bigger range. (Although the real solution is for them to depend on a 2kb instead of a 200kb module).

Obviously the first problem is subtle production bugs; the only possible way to solve it is duplication, and if your tool can't duplicate then you have buggy code and you've lost.

The second problem is a bit harder. I have yet to be in a production situation where maintaining a fork with a more generous version range is a significant reduction in file size (i.e. a non-micro optimization in performance). I'd argue that the need for such forks depends on the edge cases where file size matters (i.e. providing a 3rd party module for other web developers; for an app, a 2% reduction in file size is not a big deal).

For the many arguments about how 3 versions of jQuery is a desirable feature, I don't believe that any of you would ship an app that worked like this. Either way, it's the wrong argument to make that it's a good feature because it's entirely possible in AMD, just as it is in node+browserify. It takes a line of configuration, because no one thinks it's good default behavior, but in throw away code, I've certainly used it. http://requirejs.org/docs/api.html#config-map -- so this isn't some superior feature in browserify or node modules, most front-end devs just recognize that it's not a good practice, and therefore it's not default behavior.

To the point that has also been brought up that it's the fault of jQuery being monolithic that it doesn't work in the system, sure. I totally agree. I pushed for this in jQuery core during the jQuery team meetings and jQuery is more modular now (with AMD). I wrote the original proof of concept for a fully tiny modular jQuery 2 years ago, long before npm/browserify: https://github.com/SlexAxton/jquery/tree/modular/src/jquery/core -- https://github.com/SlexAxton/jquery/blob/modular/src/jquery/core/noop.js -- it's a good practice indeed.

However, I think that the only problem that this solves for browserify is that instead of having 3 versions of one big piece of code, you end up with 3ish versions of a lot of smaller code. I don't know what your node_modules folders look like, but even after a dedupe it's anything but minimal.

AMD isn't some sort of crazy one-off set of old java dev programmers that have big monolithic apps as @mikeal loves to paint it, that's such a weird stereotype of AMD. I never wrote big monolithic apps. AMD is much more popular than people are giving it credit here. I'm not sure why it's an argument that AMD is for big monolithic apps and node modules are for nimble cool quick coolguy apps, but you're just projecting. They're modules, guys. The npm ecosystem is pretty cool, but it totally works with AMD and a config file, so I don't have any issues with small discoverable modules in my coolguy non-monolithic apps.

Do people use AMD in production without a build step? That was actually my biggest problem with AMD; [cont...]

Absolutely not. But they don't need to be running a node server in order to develop their application. This is one of my beefs with npm/browserify: it assumes my stack is node. It's great for a node stack, probably. Not that AMD also wouldn't be great, but AMD can be great in other stacks as well, while I've found browserify significantly more cumbersome (but possible, with the exception of needing a static file server). I love developing with serve.

I wanted to minimize the differences between the code I run in dev and production, and if I am building during dev why not use the system with terser syntax and access to npm modules?

We don't build during dev but our code isn't any different between dev and production, it's just concatenated. That's why it's beautiful. There are plugins that allow you to do what browserify calls 'transforms' now, but for the most part, the code you load runs the same way it runs always. The wrapper executes, saves the function into a hash, and grabs its dependencies out of the hash. If ever the dependency isn't in the hash, it makes an async request and waits til it's in the hash.
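The hash-based flow described above can be sketched in a few lines (a toy, synchronous-only model with invented module IDs; a real loader adds the async fetch for any ID not yet in the hash):

```javascript
// Minimal sketch of the registry behavior: define() saves each factory
// into a hash, and require() grabs dependencies out of that hash,
// evaluating each factory exactly once. (Toy model; a real loader like
// RequireJS makes an async request when an ID is missing from the hash.)
const registry = {};   // module ID -> { deps, factory }
const evaluated = {};  // module ID -> exports, after the factory runs

function define(id, deps, factory) {
  registry[id] = { deps, factory };
}

function requireSync(id) {
  if (!(id in evaluated)) {
    const { deps, factory } = registry[id];
    evaluated[id] = factory(...deps.map(requireSync));
  }
  return evaluated[id];
}

define('math', [], () => ({ double: (n) => n * 2 }));
define('app', ['math'], (math) => ({ answer: () => math.double(21) }));

console.log(requireSync('app').answer()); // → 42
```

Concatenating the defined modules into one file changes nothing about how they run, which is why the dev and production code paths stay the same.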

People do mention the async loading thing fairly often, and I have heard of people doing it with browserify. I would love to hear a concrete example of an app that uses async module loading and would have perceivable differences using async loading vs static. Does anyone have a link to such an example?

Async loading is used in every AMD app that I've ever built. When you run the build, you can declare things to end up there or not. With this power, you can build up logical blocks of code that only need to be loaded in the parts of the app where you need it. For your concrete example, Bazaarvoice Ratings and Reviews loads on every product page on its clients' web pages. However, only some insanely small percentage of people ever click on the 'write a review' button. It would be silly for us to package that in the initial download, so we don't.

The simple code for this would look like this:

$('#writereview').click(function () {
  // Async, callback-style require: fetch the form module only on demand.
  require(['submission/form'], function (SubForm) {
    new SubForm();
  });
});

It's async by default, it works just like a dependency. It's the same code that runs in devmode with the same loader doing the same stuff. The require.js runtime is the same as the require.js dev time. There aren't any surprises here.

Here's a gif in case that wasn't concrete enough:

http://alexsexton.com/images/js_app_deploy/delaypackage.gif

Regarding the use case of require(someVariable + '/other'), that is not a problem for your module loader to solve. Determining what code to load/use falls under the umbrella of polymorphism that, while not tied to a formal type system, is still a valuable concept/practice in JS.

There are two ways to parse your first sentence that have different points of view. I think you are saying that a module loader shouldn't have to solve this problem. I think require.js agrees and does not.

Node created the best module system it could for the largest audience and ecosystem.

I appreciate that you were around when this stuff was going down, but this just isn't true. I can't count the number of times people talk about how they didn't have to worry about the constraints of the browser. The fact that browserify didn't come until years later from someone outside of this original group only makes that picture more clear. Node modules get some things right in the browser because it's all javascript and it's a good module system, but it can't be backronym'd into being on purpose.

The success of this approach is viewable by the growth of the ecosystem which outpaces all of our contemporaries.

You keep showing that graph. I don't think it means what you think it means. That node is popular and has a module system does not automatically make that module system best, and it especially doesn't make it best for the browser. Not to mention that there are countless AMD modules in npm along with every other conceivable module type.

I don't think anyone will read much farther so I'll stop.

If I may paint my own projection: people who run node.js full-stack, completely buy into the node and npm ecosystem, and probably don't have that much to do on the front-end, can and should use npm/browserify for their front-end module/build system. For anyone who needs to write a significantly complex application that does not need to rely on a compatible backend, there's AMD. AMD is for the web.

@maxogden:

If the question is 'how do I support my legacy javascript application that uses monolithic libraries?' then requirejs is awesome. If the question is 'how do I use a build step to bundle together small, modular javascript libraries without relying on globals?' then browserify is awesome

I would phrase this as "if you want to use npm and node's style of module declarations, and package up some code to deliver to the browser, then browserify is a great option. If you need dynamic loading, or have other requirements, requirejs is a great option".

Assuming monolithic libraries are only used with requirejs, or that it cannot take small bundles of modules and deal with builds, is not a correct assessment.

I can see where you may have made that classification by looking at some projects that have used requirejs. Traditional browser JS libraries may have packaged up a bunch of functionality for ease of distribution, but hopefully the blossoming of more JS package managers shows there are options now. And respect to npm for being a great example of a package manager. But requirejs is fine using lots of small modules, and there are build tools for the ecosystem too. Just small differences in code layout conventions and module style.

Great work on voxeljs and the open data initiatives, btw.

@dominictarr:

People do mention the async loading thing fairly often, and I have heard of people doing it with browserify. I would love to hear a concrete example of an app that uses async module loading and would have perceivable differences using async loading vs static. Does anyone have a link to such an example?

This will be common with MVC systems that want to load the next views on demand. The goal is to get fast startup times, and the fastest JS is the JS that does not get downloaded or executed until needed. There are web IDEs like Brackets and Ace that are bigger user experiences, so expect to find dynamic loading in them.

At the moment, I work on an email app for Firefox OS. Even though all the HTML/JS/CSS are local, multiple IO operations on the device can still be slow enough that some bundling of upfront assets is useful, but due to memory and CPU constraints, not all the cards for the UI should be loaded up front. The code is not the prettiest -- my work for this app is part of a morphing of older code that was not as modular, and due to diff reviews and other priorities it has to go in phases -- but some links that might be helpful:

I expect you will find a similar pattern with any MVC-backed app that uses requirejs. They may use routes or view/controller triggers to dynamically load the next bits.

I see your name pop up quite a bit in the node community, nice to converse with you online.

@bclinkinbeard:

I feel like we may have commented on an issue together, or read something similar from you before on the internet? In any case, your handle looks familiar, hello! To your question:

Do people use AMD in production without a build step? That was actually my biggest problem with AMD; I wanted to minimize the differences between the code I run in dev and production, and if I am building during dev why not use the system with terser syntax and access to npm modules?

It is very common to see builds with AMD systems, but also dynamic loads of sections on demand. The point about not needing a build system to use the module system is to highlight that all the front end module use cases are solved correctly. Dynamic loading of the next view in a web app is a very common use case.

So feel free to always build if that is what you prefer. Hopefully the above responses also help highlight the non-build, or non-single file build, use cases.

@mikeal:

Is anyone actually arguing it's better to break under version discrepancies and make the user figure it out after debugging in order to maintain a smaller file size for code that is likely broken than it is to simply require the user to de-dupe their dependencies if the file size is large?

Agreed. Any comment I made in this area is to focus the responsibility on the package manager to help the user with the issue. npm does not do this by default, it requires a separate dedupe module for the user to run, as I understand it. I would prefer a front end package manager to incorporate the conflict resolution and deduping up front, particularly if it means it needs the user to sort something out that it cannot do itself. But I am sure there are people that are fine with npm's behavior. While it is perhaps more steps than I would prefer, they can certainly get it to work.

Node created the best module system it could for the largest audience and ecosystem. It didn't ignore much at all and, being that i was around when a lot of this stuff was being worked out, I can say we did think about differences and discrepancies between the way node was doing things and the way the browser was.

Thanks for pointing out a weak phrasing on my part. "Ignored" was too strong a word. "Discarded" is probably the better term. Because of what was discarded, though, the CommonJS-derived system has weaknesses when used in the browser. It does not mean it is totally useless in the browser, but it does have weaknesses.

As @donspaulding mentions, your developer breakdown does not feel right. I think you may be attaching too much weight to where I came from, or to the others involved with the AMD effort, primarily from the Dojo community, and the perception around that toolkit.

The choice of the function wrapper, to avoid the eval and CORS issues, was made precisely to make it easier for all the amateurs. They are the ones who would get bitten by the hidden costs of trying to use a CommonJS system (or even the old Dojo system) directly in the browser.

Reducing support requests was a factor in the AMD design, and the choices were made because Dojo's older system was too clever, and we saw the CommonJS path taking a similar route. The main point though is AMD was driven by real use cases, purposely made more literal, and then evolved based on actual use and broad experience in the field.

Node did not have the same use cases though, and it got to choose its package management system and file layout conventions fairly early, so that really helps remove a lot of choices for developers. That is a good thing. More people using the same convention that is enforced by the environment.

Retrofitting modular use in the browser is harder given different communities that work in it have already established conventions. So it is closer to a bazaar than to node's cathedral (a very small cathedral that is community-built and allows for community-attached bazaars -- and I love to visit it when doing command line tools and servers).

I am hopeful though as more front end communities, and their package managers, get more used to the baseUrl + moduleID + '.js' convention for code layout, it can get a lot better for the front end, and I believe AMD loaders have really helped make the case for modular front end code. It can be a perception change from the old script based system though, so naturally there is friction when learning new ways to do things.
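
For readers unfamiliar with that convention, here is a sketch of what it looks like in a RequireJS config (the paths and module IDs are made up for illustration):

```javascript
// baseUrl + moduleID + '.js': the ID 'widgets/carousel' resolves to
// js/lib/widgets/carousel.js with no per-module mapping needed.
requirejs.config({
  baseUrl: 'js/lib',
  paths: {
    // only exceptions to the convention need explicit entries
    app: '../app'
  }
});

requirejs(['widgets/carousel', 'app/main'], function (carousel, main) {
  main.start(carousel);
});
```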

The success of this approach is viewable by the growth of the ecosystem which outpaces all of our contemporaries.

Contemporary server/command line systems, yes. Front end web development has many continents to it though, and many of them are hard to measure given the wild west of its past. The popularity or growth of node's continent does not mean it provides adequate tooling for other continents. Luckily the internet is big enough for all the continents. It is not a zero-sum game.

If node/npm wanted to expand better to the front end continents, I can see where there could be some small adjustments or options to how npm lays out files, and there is nothing preventing AMD modules being distributed in npm, they can work via an adapter when used directly in node.

However, I would not strongly advocate for that route, because node and npm have done well to choose to limit itself to its immediate concerns. Just do not expect those choices to work well on other JS continents, and expect others to point out the weaknesses when used in other environments, and build other systems.

And as mentioned before, node+npm+browserify still solve browser coding issues for some people. For me, it is the same class of browser use cases that the Rails asset pipeline solves, so therefore not really matching my needs. But I completely understand it is enough for some people. And at least it uses JavaScript!

BTW, I really appreciate your efforts around JS community conferences. Not a skill I have, but community is very important, and it is great to see people putting energy into small, respectful conferences.

AMD is for the web.

AMD via RequireJS solves a problem at the library-consumer level; it doesn't help library designers write better libraries. On the other hand, Browserify is for library designers, but not for library consumers.

Anyway, AMD can only be solved by those who write the JS spec, not by a third-party library that is not a standard. It's exactly the same problem as with custom class implementations, where different implementations don't work together and everybody thinks his way is the right one. Even with promises: why should I follow a non-official standard for my promise library?

That's why we have specs, don't we? So everybody agrees on something and codes the same way. JavaScript lacks specs on these issues; I mean finalized specs, not drafts. I don't care about drafts.

Yet in 2013, almost 20 years since JS's creation, we still don't have a standard way to load modules? The only reason we are having this discussion is the lack of a spec*. It's not our job as developers to fix a language, and DIY solutions are not durable (they cannot work in the long run).

*: in the browser. It doesn't matter what JavaScript looks like on the server.

For the record, async loading can be achieved with browserify. You just create separate bundles and have them communicate through an application-specific global bridge.

It's not a first class experience. However there is a module that makes browserify's require a first class global bridge, assuming you use your async script loading mechanism of choice (adding a script tag is trivial).
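
A sketch of that bridge pattern, under the assumption that each bundle registers its exports on a shared global (the names `__bridge` and `loadBundle` are mine, not part of browserify):

```javascript
// Each bundle registers what it exports on a shared global; other bundles
// are fetched on demand by injecting a plain script tag.
var root = typeof window !== 'undefined' ? window : global;
root.__bridge = root.__bridge || {};
root.__bridge.register = function (name, exports) {
  root.__bridge[name] = exports;
};

// Generic async loader: the loaded bundle is expected to call
// __bridge.register(name, ...) when it executes.
function loadBundle(url, name, callback) {
  var script = document.createElement('script');
  script.src = url;
  script.onload = function () { callback(null, root.__bridge[name]); };
  script.onerror = function () { callback(new Error('failed to load ' + url)); };
  document.head.appendChild(script);
}
```

A secondary bundle built with browserify would end with something like `__bridge.register('reports', require('./reports'))`, and the main bundle would call `loadBundle('/js/reports.js', 'reports', cb)` when the route is hit.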

This is how it should be; async loading is an edge case optimization that's actually quite easy to do transparently with a bit of configuration.

async loading is an edge case optimization that's actually quite easy to do transparently with a bit of configuration

Not true. It is not an edge case. I'd even argue it is the primary case.

Any project of reasonable size benefits from async loading. And it should be done painlessly (i.e. not a bolt-on hack). Every single project I've worked on could have been better with a) small modules b) asynchronously resolved+loaded+cached bundles. So far, I've never managed to achieve both without AMD.

At the moment, I'm working on a widget thingie - we dump JS on other people's websites. Therefore I use super-micro (nano?) libraries and browserify - AMD has too much syntactic sugar for that. So I manage to fit the whole thing into 20-25KB, while still having maintainability. This is an edge case. In most other cases I used to have hundreds of KB of code (excluding libraries) in hundreds of modules. You can't manage that with "it's trivial to inject a script tag" philosophy.

I've even had a case where we started off with a 900KB single file when minified (the approach was exactly this: let's concatenate all of our small modules and trivially inject a script tag). And mind you, the size of the codebase does not necessarily mean that it's "bad" code. Some problems and systems are complex and complicated. They need that much code. Which VERY quickly starts to hurt when you send it over 3G.

It's all nice and dandy in the startup world - "oh we'll just build small modules". The reality is that 99.99999% (if not more) of all the code ever written is legacy code. It's all great when you "fail early", i.e. before you build a system which actually does a LOT of things. The reality is that businesses need to run the same codebase for years - and that needs to be managed, maintained and needs to run in the browser and load fast. And AMD solves a lot of that.

I think Browserify is a step in the wrong direction for front-end web development. File system dependencies like node.js have almost nothing to do with the web browser run-time. To me it seems like nothing more than a case of "if all you have is a hammer, everything looks like a nail."

I get it, people got REALLY into 1970s style Unix programming over the last few years. Everyone switched to Vim or Emacs. Y'all really love your file descriptors and pipes! And that's absolutely fine, for code running in a Unix runtime.

However, front-end javascript does not run in a Unix runtime. It runs in the browser. It is basically in a completely different universe, a universe that has no concept of files and owes more to predecessors like Self and Smalltalk than to anything Unix related. Self and Smalltalk, while being inspirational to JavaScript in language semantics and philosophy, also share very similar runtime environments, where files and "code-compile-run" have no place.

Having a dependency management system based on Unix and files is basically like flying the airplane while sitting on the wing, well outside of the cockpit.

I've been working on a project built on top of RequireJS called lit. The current alpha version is up at http://www.corslit.com.

The basic premise is that modules are written to and read from the network. Nothing ever hits your filesystem. Hell, nothing ever hits a filesystem anywhere, as it is stored in memory in Redis.

Check out this page: http://www.corslit.com/williamcotton/FutureTwin:Bolinas

It should look very familiar to AMD. However, because the function is serialized and stored on a remote server, a simple call to lit.build() instantaneously turns it into this: http://www.corslit.com/v0/williamcotton/FutureTwin:Bolinas-v0.0.5.js

I have found npm to be very inspirational in this process. I love that lib sizes are so small, which I feel is a reflection of the ease of publication. However, there is still overhead in the form of git repos, package.json files, READMEs, submissions to npm, etc, etc... it may not SEEM like much but something like lit takes orders of magnitude less time to publish, seconds as opposed to minutes.

This drives down the size of published modules to something more atomic, the function level. There is an incredible amount of inefficiency at the function level. Every JS lib has a deep clone, an array helper, a UUID generator, a cookie reader, an XHR wrapper, etc, etc. Sure those only take minutes to write, but all of those minutes add up! Also, it leads to silos of code. Why don't Angular and Ember share more resources? Why do they have function after function that do the exact same thing, written twice in each code base? Maybe because as it stands now, publishing a 5 line function to the community seems like too much work for so little?

I really feel like git itself is the culprit for a lot of this. Git was written with a monolithic codebase in mind: the Linux kernel. It was NOT written for how we're writing applications these days on the cutting edge. A node project is basically 2% custom code and 98% code coming from other authors. We've already blended the distinction between "version control" and "package management" in our processes, but our tools are not designed with that in mind.

The npm build process is very slow because of all of the network IO. With lit it is damned near instantaneous as the code is built right out of memory. It fully allows for multiple versions of the same module, in a manner like npm.

Also, lit isn't gunning for npm. It is gunning for Bower. I can't for the life of me understand Bower. You use a package manager (npm) to install a package manager (Bower) that is basically just the first package manager. It doesn't DO anything remotely related to the browser, it just downloads things from GitHub. Why not just use npm for that if you're wanting to do things the Unix way? What does Bower do at all?

lit is NOT ready for primetime yet. However, I'm doing most of my work in it. It is even fully self-hosted for front-end code: http://www.corslit.com/williamcotton/lit

You guys should really check out Self and Smalltalk. And not just the languages, but the systems. They don't make sense when separated from their systems. We need IDEs for JavaScript that are built in and run in the web browser.

It would be better to use a different type of package manager for front end code that tried to reuse existing module versions installed, possibly warn the user of diffs

Bower does what you ask for. It also allows you to force the latest version, for better or worse.

I believe you are right, npm is not working for the browser. That's why I usually add a bower.json as well as a package.json to any module that is supposed to be used in both environments. This way one gets a warning when there is a version conflict and it is possible to provide different main files if necessary. With the bower list command it is also pretty easy to concatenate all the files for the client.

On the subject: I can't see how these two are competing technologies. If we are talking npm, it is just a package manager, and require.js, at least to me, serves as a module loader.
Then there is Node.js, which comes with a module loader, that embraces a common API for modules, while the browser doesn't have one at all (yet).

If I needed to describe my ideal module system right now, it would contain...

  • a module loader for each environment
  • one module API
  • a thing that helps me fetch my dependencies

Though, again, I prefer a module fetch thing for each environment, Node.js and the browser

I imagine this can be accomplished by throwing bower, npm, commonjs and require.js into the mix. It involves some build steps though.
For module loading in the browser, I am also a big fan of the idea the Google guys presented at last year's jsconf.eu.

My thoughts take into consideration that people want to be able to use modules in both environments. The ideal module system might look different if one works in one environment exclusively. For Node.js that would be pretty obvious.

@SlexAxton and @jrburke (I think we chatted on Twitter once or twice), thanks for your follow up. I think you both made lots of reasonable points. I still prefer the Browserify approach, but if I am being honest it is largely that: a preference. Obviously, both AMD and Browserify can lead to failure or success, and the defining factor is the skill and care of the developers behind the project.

Something I encountered during my time with AMD, which I think may contribute to the perception that it's associated with monolithic libraries, is that very few libraries seem to properly support the format. You therefore end up having to use the config to wrap a lot of things that are defined on window, which I think (and hope) we can all agree is a bad thing (attaching things to window, that is). I (like to) think that the CommonJS format encourages people to write better code, or at the very least prevents them from polluting the global scope.
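
For reference, the window-global wrapping mentioned here is typically done with RequireJS's shim config; a sketch, with hypothetical names:

```javascript
// Wrap an old library ("legacyLib") that only attaches itself to window,
// so it can still be required as a module.
requirejs.config({
  paths: {
    legacyLib: 'vendor/legacy-lib' // resolves to vendor/legacy-lib.js
  },
  shim: {
    legacyLib: {
      deps: ['jquery'],        // the load order the global script expects
      exports: 'LegacyLib'     // window.LegacyLib becomes the module value
    }
  }
});
```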

I also don't really buy the complaint that using Browserify forces you to run Node. I mean, it does of course, but are you really not already using it for Grunt and the like? I can't imagine doing modern web development without the arsenal of tools we have today, and almost all of them run on top of Node.

Lastly, just to provide an anecdotal contradiction to @SlexAxton's projection, we are using Browserify on my current project with great success. We are building a SPA using AngularJS, jQuery and D3 that is very heavy on data visualization, and talks to a Java back end running on AWS with PostgreSQL. I don't know the exact number of JS modules or LOC, but I would ballpark the modules around 200. We build Angular, jQuery, D3, underscore, etc. into their own libs.js bundle, which is 102 kb gzipped. Our app.js bundle is 35 kb gzipped. Not incredibly tiny, but it runs very well on everything from a first generation iPad mini to desktop browsers.

@substack is your improv comedy troupe stuck in the 70s as well?

I kid, but if you want to meet up over a beer and talk about a bunch of shit like this sometime, I'd love to! I live in SF and I'm not even afraid of visiting big bad old Oakland.

I'm serious, let me know!

I'm starting to see people talking past each other and I think I know why.

Commonly used browser libraries have, traditionally, been rather large. There are a lot of reasons for that, but we can admit that the average library was significantly larger than a typical node module.

Node modules are very small, obviously.

The likelihood of a version change breaking dependencies or applications is directly proportional to the size of that library.

When a node module's minor version changes, the assumption is that you should make sure everything works before depending on a new one, because the likelihood of that change affecting your code is much larger than for a minor release of jQuery, which people update all the time without even checking.

If your goal is to grow an ecosystem as large as node's then you'll need to assume that modules are smaller and that changes are more likely to break.

I appreciate that you were around when this stuff was going down, but this just isn't true. I can't count the number of times people talk about how they didn't have to worry about the constraints of the browser. The fact that browserify didn't come until years later, from someone outside of this original group, only makes that picture more clear.

You misunderstand what node's module is and when it was formed.

The earliest versions of node took their module system directly from CommonJS. Yes, CommonJS was created outside the constraints of the browser primarily by the developers of Narwhal.

Even during the earliest days of npm @isaacs was working with @ry to tweak the module system. In fact, the way @isaacs ended up becoming one of the earliest "committers" was by taking over responsibility for node's module system and by late 2010 npm and the module system were being developed in tandem.

Browserify 0.0.1 has a version target of 0.2, which means it was created before the module system truly became "node's". @substack was also involved in the changes in the module system, mostly reactively, since most changes rippled through browserify.

Most of the complaints I see here, especially those relating to versioning and localization of deps are all decisions made long after browserify was created and are unique to node and in some cases even cause spec incompatibilities with CommonJS. In fact, without browserify I think some things may have been very different but browserify was showing that browser packages could be built from node's module system without changes.

So yes, the browser was considered. Remember that during this time @isaacs and I were still in CommonJS while they went down the AMD rabbit hole and the first version of the npm registry was even described as a standard for CommonJS around the same time people were arguing about which AMD spec to go with.

Node's module system didn't need to ignore the browser or to focus on being a systems module system; it was free to focus on being the best module system for modules.

That is why it has succeeded: the first and most important customer of a module system is modules, and node continues to be the easiest way to write, publish, and consume modules. Every other consideration is secondary.

You keep showing that graph. I don't think it means what you think it means. That node is popular and has a module system does not automatically make that module system best, and it especially doesn't make it best for the browser.

If I may paint my own projection: people who run node.js full-stack, completely buy into the node and npm ecosystem, and probably don't have that much to do on the front-end can and should use npm/browserify for their front-end module/build system. For anyone who needs to write a significantly complex application that does not rely on a compatible backend, there's AMD. AMD is for the web.

I have a lot more graphs now, and a lot more data :) I've been ripping data out of GitHub for a while and it says a lot more than just the module counts do.

The first thing it shows is that there really isn't a node ecosystem and a browser ecosystem. A huge portion of the engagement is in browser tools and modules, and the people stretch between different parts of the ecosystem.

Once you realize that node, on a community level, is not separate from the web but that it is of the web and has become a vertebra in the spine of web development these arguments become very very moot.

What is the best module system is subjective, node's is the fastest growing, not just in comparison to other JS module systems but in comparison to all module systems. Adoption matters.

We aren't fighting for what system people use to package modules in to their apps, and RequireJS certainly has more features to offer here than browserify, we're fighting for how people define their modules. In that, node's module system is winning, and all browser tools would benefit from adopting support for modules built in it. For its part browserify will consume a module written to any modern standard.

@jrburke:

Thanks for the kind words :)

I mentioned this in my reply to Alex but I don't believe there are continents of "frontend" and "backend" I believe we are instead awash in the sea of the web.

Compatibility breeds growth. The more compatibility you can ensure between your parts the more you'll grow new parts. IMO node already won whatever fight we had over how modules get defined. I won't begrudge people who choose to do otherwise but any tool that is built for the benefit of any web developer should support modules written in the fastest growing format in addition to whatever other format it wishes to use in order to extend their functionality.

The jQuery argument is very difficult to follow. In many senses jQuery is a large, framework-like choice. It is the equivalent of an environment. Small modules shouldn't really require jQuery at a version as a dependency. You either write a jQuery plugin or you export a function that is passed jQuery as an argument. Fishing the jQuery token out of the browser global scope and passing it into the module is even far better than requiring jQuery anywhere but at the top level of your application (not that I would recommend either). Please extend this argument to any other monolithic environment that may cause file size issues ;)

I also agree with @dominictarr in that freedom to refactor at your leisure is an important feature if you are going to encourage developers to refactor at all (that and a decent test suite).

But I digress. Having used browserify extensively in production, it pays to keep track of your dependency tree, simply because the optimization problem is something that you have to take charge of. I think this discussion is primarily focused on browser optimization and the freedom to make choices about these optimizations.

As far as I am concerned, a require statement is communication of intent, and require('module') is a pretty low complexity solution. It doesn't state how that thing is downloaded/bundled under the hood. It's not designed to optimize for the browser in the way you would like it to under certain conditions, but it's really out of the scope of the module system to provide this for you. If you wish to criticize browserify on implementation detail, it may be more fruitful to record these problems as github issues so they can be addressed.

Optimization is simply about hand crafting under particular use cases. It's one of the most challenging and rewarding aspects of browser development imo. It really is worth addressing anything that compromises your freedom to optimize the code in the way that you would like.

Was anyone else curious about the size of the animated gif posted earlier in this thread? It's 1.7MB. Think about that for a moment. :)

The initial draw to requirejs for me was the idea of lazy loading. I learned several lessons while building a half-dozen websites in requirejs.

  • deferring loading over http has a cost you don't hear a lot about: having to stop mid-UX and wait for your bytes to load before the app can continue. I'm sure with a large enough codebase, the up-front cost of loading the whole pile of javascript overwhelms the lazy loading, but with a combined/minified/gzipped bundle, that takes a pretty dang large amount of code. The time spent with multiple network requests in flight for part of your codebase can easily overwhelm just getting the whole thing up front.
  • there's significant complexity in the requirejs config and runtime. I mean c'mon, the fact that the sanctioned optimization is to replace require.js with almond.js or something lighter at run time is a pretty clear admission of this by the authors, IMO.
  • It became really challenging to do some of the more advanced google and yahoo page speed optimization recommendations. For example, let's say you want to rename your modules based on an md5 hash of their contents, and have that still work with what r.js produces. This is annoyingly complex.
  • when I first started using modules I thought of browsers and node.js as living in different worlds. I think this is a leftover from working with old school traditional page request sites for a decade. But that line is blurring and disappearing. There's not a lot of code that won't run in both.
  • Whether you're using less, or sass, or coffeescript, or minification, or uploading to a cdn, or running unit tests, or optimizing your images: those are all build steps. Any time you need to transform your assets in some way, you have a build step. Things get so much better when you can accept that and embrace it. I'd wager that part of grunt's popularity is a wide-scale realization of this fact.
  • When you realize you've had build steps all along, building in dev isn't a scary thing to avoid anymore. In fact you embrace it because your dev/staging/prod environments are now even more similar. And you get nice things like live reload for free, and other activities that you want to tack onto your change hooks.

The end result was a switch to browserify, because the gains in simplicity, site responsiveness, and ease of doing more advanced deploy optimizations added up. It's no coincidence that end users who know nothing about code were remarking how much snappier the site was. Before-and-after comparisons on google page speed tests were bumping the scores from 70-80% to as high as 98-100% in many cases too.

@mikeal

I mentioned this in my reply to Alex but I don't believe there are continents of "frontend" and "backend" I believe we are instead awash in the sea of the web.

This would be much truer if the weaknesses mentioned in the gist were addressed. For instance, I expect ES modules, since they do have allowances for those issues, to bridge the continents better.

Compatibility breeds growth. The more compatibility you can ensure between your parts the more you'll grow new parts. IMO node already won whatever fight we had over how modules get defined.

By solving the problems mentioned in this gist, and using the same module system in the browser and in other JS envs, that will give even greater compatibility and greater growth.

I know it may be difficult for you to see given your heavy involvement in node and its ecosystem, but AMD usage is very strong. For me, this proves out that these problems are real. I am sure you can make a case for AMD usage as not being as visible as what you see in node, but then node does not try to solve the same problems either.

You can mitigate those problems by using more tools and transforms. For me, that complicates the solution.

ES modules, however they turn out, will not be a direct copy of Node's system, and will specifically address the weaknesses mentioned above. Because they are real problems. In that respect, node did not win the fight. It certainly has some great usage patterns to share though.

I won't begrudge people who choose to do otherwise but any tool that is built for the benefit of any web developer should support modules written in the fastest growing format in addition to whatever other format it wishes to use in order to extend their functionality.

The problem is that Node's module system cannot actually express equivalent module desires. There is a lot of overlap, but this gist is about where it falls short.

If you or other people in the node community get frustrated when asked about AMD use in node, here is a short, diplomatic answer that lets you move on to other conversations that you would rather have:

"No plans to change Node's module system. Wait for ES modules if you want capabilities outside of Node's system. AMD modules could be used in Node, but it requires userland modules and adapters to work. You will likely encounter less friction with other Node module usage if you stick to the basic Node module system. If you want to distribute code targeted for the browser using NPM, NPM lays out code according to Node's needs. You will need to use other tools on top of it to convert the code to a form or layout usable in the browser."

This would be much truer if the weaknesses mentioned in the gist were addressed.

If a few use cases and bugs would make or break this kind of thing, we wouldn't be using the web at all :)

ES modules, however they turn out, will not be a direct copy of Node's system, and will specifically address the weaknesses mentioned above. Because they are real problems. In that respect, node did not win the fight. It certainly has some great usage patterns to share though.

ES modules will support node modules. It will support them directly as a result of node's tremendous growth and until that growth was apparent ES modules had not intended to do so and early drafts were incompatible.

Again, adoption matters. As a module format node modules are the dominant pattern.

In the future there will likely be more loaders of node modules, including ones that work w/ ES6's module loader, but that doesn't mean that ES Modules will grow as large as node as the format used by module authors. In addition, tools like browserify will likely support modules written to the new ES Modules format in addition to their support for node's native module format.

If you or other people in the node community get frustrated when asked about AMD use in node, here is a short, diplomatic answer that lets you move on to other conversations that you would rather have:

We don't really need an answer for that because this isn't really an issue. People complain about lots of things in node but the module system is rarely one of them. Anyone asking that node switch or make breaking changes to the native module format would be quickly dismissed, the size of the existing ecosystem is far too large to break compatibility at this point. Many ideas of varying quality are dismissed quickly nowadays because they would break compatibility which is no longer acceptable.

AMD support was asked for, and added, in tools like browserify.

I don't know why you continue to insist that these problems must be addressed by the module format. They have been addressed in tools and loaders built in node, quite a few of them actually. Now, you may prefer your solution, and the one you have built may address many of these issues together, in an alternative module format, in addition to your tooling that consumes it. However, building together things that can be built apart is not the node way. There are modules that fix many of the issues you raise. So long as those tools exist in the ecosystem, this is not a problem for core and thus not an issue to be resolved by altering the module format.

In terms of duplicate module loading and version handling this is actually handled by npm, which is a module, and you could write an alternative to it that still supported the module format used by everything in the npm registry.

All of the issues you have are solvable on top of the node ecosystem, I know that isn't how you've chosen to solve them but claiming that node must change its module format in order to come to the same level of support seems absurd.

I don't agree that the issues you are bringing up are as widely held as a blocker as you do. They are certainly valid cases and are problems that people should pick a solution for, of which there are many. In having such a vibrant ecosystem I've come to expect that any problem will be resolved by someone relative to its importance. Sometimes that person is me, most of the time it is not.

File size is certainly an issue, and there are many ways to solve it, including removing duplicate deps of differing versions. I work on a very large app, and this is a real concern for us, but when I add a few deps I run dedupe and disc and resolve this kind of thing. It's hardly worth it for me to abandon the dominant pattern and ecosystem to avoid running a few commands and poking at a visualization, nor is it rational for me to suggest that node should alter its module format and npm's versioning strategy so that I might avoid doing so.

@mreinstein we actually don't "build." Instead we put all the buildy code in the route handler and cache the resource in memory indefinitely.

This means that dev mode is just a flag that turns on source maps and avoids minification. It also sets up a file watcher that flushes the cache on any change. We found that reducing the differences between dev mode and production, as well as removing a step people might forget to run in either dev or deploy, reduced errors and bugs.
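A minimal sketch of that pattern, with hypothetical names (makeBundleHandler and buildBundle stand in for whatever build step you actually run; this is an illustration, not our real code):

```javascript
// Hypothetical sketch: build in the route handler, cache in memory forever.
var cache = {};

function makeBundleHandler(entryFile, buildBundle, devMode) {
  if (devMode) {
    // Dev mode: flush the cached build whenever a source file changes.
    require('fs').watch(require('path').dirname(entryFile), function () {
      delete cache[entryFile];
    });
  }
  return function (req, res) {
    if (!cache[entryFile]) {
      // First request (or first after a flush) pays the build cost once.
      cache[entryFile] = buildBundle(entryFile, {
        minify: !devMode,
        sourceMaps: devMode
      });
    }
    res.setHeader('Content-Type', 'application/javascript');
    res.end(cache[entryFile]);
  };
}
```

The point is that production and dev share one code path; the flag only changes build options and whether the watcher flushes the cache.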

@mikeal to be fair, there is one problem not solvable within Node's authoring format, which @jrburke has emphasized several times. That problem is wanting to load modules cross-origin without CORS. You need a JSONP-like function wrapper for that, which AMD provides and Node's authoring format does not.

(I am actually unclear whether ES6 modules will support CORS-less cross-domain loading; I recall a few confusing discussions but not a conclusion.)
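For the curious, the reason the wrapper matters can be shown with a toy registry (an illustration only, not a real AMD loader): a cross-origin script tag is allowed to execute and call back into the page even though its source text cannot be read, so a define()-wrapped module loads fine, whereas a loader that must fetch and read raw CommonJS source needs XHR and therefore CORS.

```javascript
// Toy module registry: a cross-origin <script> only needs to *call* define(),
// never to expose its source text, which is what makes it JSONP-like.
var registry = {};
function define(id, deps, factory) {
  // Resolve dependencies from the registry and store the factory's result.
  registry[id] = factory.apply(null, deps.map(function (d) {
    return registry[d];
  }));
}

// A module served from any origin just has to execute calls like these:
define('greeting', [], function () { return 'hello'; });
define('app', ['greeting'], function (greeting) { return greeting + ' world'; });
```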

@mikeal what if npm allowed you to specify the local variable names for the dependent modules? So, like, instead of having to explicitly write:

```js
var someModule = require("someModule");
```

One could in the package.json file do something like:

```js
localVariables: { someModule: "someModule" }
```

where it is referencing the thing from the dependencies hash?

That way, some mechanism can choose to write to "CommonJS" style and some other mechanism can write to AMD style.

Current modules would continue to work just fine, and authors who are interested in more easily supporting AMD could rely on a built-in mechanism.
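To make the idea concrete, here is a toy generator for that hypothetical localVariables field. The field name and behavior are purely speculative, nothing npm actually supports:

```javascript
// Given the speculative localVariables map from package.json, emit either
// CommonJS require lines or the opening of an AMD wrapper.
function emitHeader(localVariables, style) {
  var names = Object.keys(localVariables);
  if (style === 'commonjs') {
    return names.map(function (name) {
      return 'var ' + name + ' = require("' + localVariables[name] + '");';
    }).join('\n');
  }
  // AMD: dependency ids become factory arguments in the same order.
  return 'define(' +
    JSON.stringify(names.map(function (n) { return localVariables[n]; })) +
    ', function (' + names.join(', ') + ') {';
}
```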

The thing that really grinds my gears the more time goes on is that all Node.js really needed to do to be “compatible” with basic AMD modules was to expose a define function that would read the dependencies and consume the factory function. The same node_modules filesystem module lookup system could have been used, the same module cache, the same configuration, and people that wanted to write modules only for Node.js could write them with the less unfortunate syntax they use today. The only practical difference in doing this would be that AMD modules become easier to use within the Node.js ecosystem because they Just Work like any other Node.js module, with the added benefit of also working natively in a browser. The Node people handle dependency conflicts with more specific node_modules directories, the browser people handle dependency conflicts with AMD map, and everyone wins (more or less).
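Such a shim is tiny. Here is a rough sketch in the spirit of amdefine (illustrative only, not the real package), mapping AMD dependency ids onto Node's own require so the same lookup rules and module cache apply:

```javascript
// Minimal AMD-style define() for Node.js: delegate every dependency id to
// the module's own require, reusing node_modules lookup and the module cache.
function makeDefine(nodeRequire, nodeModule) {
  return function define(deps, factory) {
    var args = deps.map(function (id) {
      if (id === 'require') return nodeRequire;
      if (id === 'module') return nodeModule;
      if (id === 'exports') return nodeModule.exports;
      return nodeRequire(id); // same lookup rules, same cache as plain Node
    });
    var result = factory.apply(null, args);
    // AMD factories may return their exports instead of assigning them.
    if (result !== undefined) nodeModule.exports = result;
  };
}
```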

Instead of doing this, libraries like Dojo that try to provide useful, standard, cross-platform code and utilities without requiring any compiler or middleware (since there are, sadly, still many groups that develop primarily or exclusively against e.g. Java backends) are stuck either adding even more dependencies and even more boilerplate to every single module, or having multiple releases (which somehow invariably confuses people), or forcing users to commit to loading their application in Node.js in a totally different way to the way applications normally load in Node.js.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library, it makes me feel sad that the barrier to adoption is so high that I can’t ever hope to compete with the contemporary Node.js “alternatives” in popularity, in large part because Node.js authors decided against allowing the AMD syntax to be supported as a first-class module format.

Oh well. Come 2020, we’ll probably be able to use ES6 modules everywhere. Probably.

I am presenting on this very topic at CascadiaJS next week so thank you, everyone, for all of this fantastic content. :)

If anyone wants to debate this more in person, I'll be in Vancouver soon and happy to buy a round or two.

Cheers!

That problem is wanting to load modules cross-origin without CORS.

Can't that same library that is already exposed via the web be made available as an npm/bower/component/whatever package, and pulled into your project instead of via CORS? To me this exemplifies the difference in philosophy between the two projects: RequireJS's goal is to provide a solution for every conceivable way you might possibly consider declaring dependencies. Browserify doesn't support as many of these cases, but does so without incurring the technical complexity/debt.

I don't pretend to speak for everyone. For me (and probably some others), I'm not willing to make that tradeoff. For example, another edge case that was brought up earlier: could I design my apps to use something like require(prefix + "/some_module")? Of course I could. I suspect that if I saw this kind of thing in someone else's code that I'd have to maintain, I would not be happy trying to track down how prefix gets assigned. And the fact that it can be assigned in many convoluted ways ensures that some idiot will.

@csnover it is not "the node way" to add things to core which can be accomplished by modules. Tools like browserify support AMD, they live in the ecosystem.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client. Everything @substack and @maxogden do is the same. The entire voxeljs ecosystem is built on dual purpose libraries including all the ndarray modules.

You are only a minority in that you are doing this in a smaller ecosystem. Most of the node community is writing things that work in both places, most of the modules in npm work with browserify. This is the world we live in, you might consider joining us :)

@mikeal, I don't like answers that essentially say "our ecosystem is big, you should suck it up and do things our way, even if they're not ideal". It's a cop out, and basically says that because it's a problem you don't care about, no one else should care. Claiming that you're right because you're popular isn't useful. What is the actual technical merit of your argument against what @csnover has said? Frankly, the JavaScript module loading ecosystem existed far before Node.js and npm. That you choose to ignore Dojo and the painful lessons we learned, and later AMD, is of course your choice, but stop insulting us by saying we're a minority. Browserify does not solve the problems we need to solve every day for our users.

Hi @mikeal,

it is not "the node way" to add things to core which can be accomplished by modules

In my response I was talking about "the node way" in the sense that Node.js intentionally decides not to make even simple changes to core (and in fact actually made such a change once, but then reverted it) to improve compatibility, and how that negatively impacts users. You are welcome to look at it as a more positive thing, but as an AMD user, this is the perspective I have. I like Node.js, but it's very frustrating to me because of this singular problem.

Maybe I also do not understand what "core" Node.js is, but it seems to me to include many things that could be provided by modules—crypto, ssl, http, url, punycode, and on and on. Include the fact that you can add C++ module extensions to V8, and Node.js "core" would pretty much be nothing but V8 itself if it followed "the node way". So I obviously do not understand this perspective, except insofar as it maybe helps provide a convenient excuse to not be compatible with an inferior (but necessary for async platforms) module format.

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client.

Relative to the number of JavaScript developers there are out there, it is practically a rounding error. Node.js developers are themselves a fairly small subset of the JavaScript ecosystem.

This is the world we live in, you might consider joining us :)

A world in which the responsibility to pragmatically improve cross-platform compatibility is abdicated is not really a world I want to live in. Thank you for the offer, though.

@domenic ES6 modules will support cross-domain loading without CORS. Scripts loaded in this manner won't go through all of the Loader callbacks, since that would expose their source. But they'll at least still load.
