@jrburke
Last active December 26, 2015 20:19
Again with the modules

Doing this as a gist, because I don't have time for a polished post, I apologize:

If you had to design an optimal module system for async, networked file IO (the browser), what would it look like? It would not be node's system, for the following reasons:

  1. require('') in node is very imperative. It can be called at any time, and it means a module should be synchronously loaded and evaluated at the point where it is encountered. How would this work in the browser case if the call is require(someVariable + '/other')? How would you know what module to include in any bundle? For those cases, you should allow an async require to do the fetching, and reserve the require('StringLiteral') form for dependencies that can be bundled. Node has no allowance for this, and browserify will not do a good job with those types of dependencies. It will need hints, or an explicit out-of-module listing of what to include.

Once you have an MVC system in the browser, it is common to delay loading a view and its controller until they are needed, and to rely on routing via URLs or button actions to know which module to imperatively ask for next. This works well in AMD since it has a callback-style require for exactly this. It really helps performance: the fastest JS is the JS you do not load. This is a very common practice in AMD systems. browserify cannot support it natively; the suggestion when using browserify is to choose your own browser script loader and figure out the loading yourself.
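For illustration, a minimal sketch of that callback-style require (the route handler and module IDs are hypothetical, not from any particular app):

```js
// Hypothetical route handler; 'views/detail' and 'controllers/detail' are
// placeholder module IDs. Nothing is fetched until this route is first hit.
function onDetailRoute(itemId) {
    require(['views/detail', 'controllers/detail'], function (DetailView, DetailController) {
        var controller = new DetailController(itemId);
        new DetailView(controller).render();
    });
}
```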

  2. Similarly, since there is no synchronous module fetching and execution in the browser (at least it would be madness to want that), you cannot reliably preload a modifier to the module system so that it takes effect for the rest of the module loading. The node example is requiring coffeescript in your top-level app so that .coffee files are considered during module loading.

Instead, AMD has loader plugins that allow you to specify the type of resource that is being requested. This is much more robust, and leads to better, explicit statements of dependencies. Plus, loader plugins can participate in builds. This is better than the browserify plugins for this type of thing because of the imperative nature of JS code: fs.readFile(someVariable + 'something.txt').
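As a small illustration, here is a dependency on a text resource declared through the RequireJS text! loader plugin (the template path is made up):

```js
// 'text!' is the RequireJS text loader plugin; templates/row.html is a
// placeholder path. The resource is a static, declared dependency, so the
// optimizer can inline it into a build.
define(['text!templates/row.html'], function (rowHtml) {
    return function renderRow(data) {
        return rowHtml.replace('{name}', data.name);
    };
});
```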

Static declaration of dependencies is better for builds. So if you like builds, favor a system that enforces static declaration, and has an async form for non-declarative module uses. ES modules will go this route, and explicitly not use node's module system because of this imperative dependency issue that node has.

The parts that are ugly about AMD:

  1. It explicitly asks for a function wrapper. Node actually adds one underneath the covers when it loads modules. browserify does too when it combines modules. While it would be nice to not have the function wrapper in source form, it has these advantages:

a) Using code across domains is much easier: no need to set up CORS or restrict usage to CORS-enabled browsers. You would be amazed by how many people have trouble with this, as it is a hidden, secondary requirement. They are developing just fine on their local box, do a production deployment, and then things don't work, and it is not obvious why. You may be fine with knowing what to do here, but the general population still has trouble with it.

b) Avoids the need to use eval(). Eval support in the browser has traditionally been uneven (scope differences), and CSP now makes it even more of a hazard to use. I know of at least one case of a "smart" proxy that tried to "protect" users by stripping eval statements out of JS.

In short, the function wrapper is not ideal, but it avoids hard-to-trace secondary errors, and it really is not that much more typing. Use the sugar form if you want something that looks like commonjs/node.
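For reference, the plain AMD form next to the sugared simplified-CommonJS form (the module names are placeholders):

```js
// Standard AMD form: dependencies declared up front.
define(['./store', './render'], function (store, render) {
    return function show(id) {
        render(store.get(id));
    };
});

// Sugared simplified-CommonJS form: the loader scans the factory for
// require('...') calls, so the body reads like a node module.
define(function (require) {
    var store = require('./store');
    var render = require('./render');

    return function show(id) {
        render(store.get(id));
    };
});
```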

  2. The possibility of configuration blocks: front-end developers have much more varied expectations about project layout; node users do not. This is not the fault of requirejs or other AMD loaders though. At the very least, supporting a configuration block allows more people to participate in the benefits of modular code.

However, there is a convention in AMD loaders of baseUrl + module ID + '.js'. If a package manager lays out code like this, then there is no config block needed in an AMD loader. volo does this.

npm could even do this to bridge the gap: when installing a package called dep, put it at node_modules/dep and create a node_modules/dep.js that just requires dep's main module. That would also then work for AMD loaders (if the modules installed were AMD compatible or converted) if the baseUrl was set to node_modules.
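To make that concrete, here is what such an adapter file might look like; this is hypothetical (npm does not do this today), and it assumes dep's main module lives at dep/main.js:

```js
// node_modules/dep.js -- hypothetical adapter a package manager could write
// out, assuming the package's main module is at node_modules/dep/main.js.
// With baseUrl set to node_modules, an AMD loader resolves 'dep' to this file.
// (The node-flavored equivalent would be: module.exports = require('./dep/main');)
define(['dep/main'], function (main) {
    return main;
});
```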

So, it is entirely possible to stick with the AMD convention and avoid config blocks. Package manager tools have not caught up yet though. Note that this is not the fault of the core AMD module system; it is a package manager issue. And frankly, package managers have been more focused on layouts that are easy for them than on layouts that are best for runtime use and that avoid user configuration. That is the wrong decision to make. Making it easier for users and runtimes does not actually make things that much more complicated for the package manager.

On package managers:

It is important to separate what a package manager provides and what the runtime module system provides. For example, it is possible to distribute AMD-based code in npm. amdefine can help use that code in a node environment. Particularly if the dep.js style of file is written out in the node_modules directory.
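For example, here is the standard amdefine pattern for making an AMD-authored module work under node (the module body is a made-up placeholder):

```js
// Boilerplate from the amdefine README: under node, create a define() bound
// to this module; in an AMD environment, define already exists and this line
// does nothing.
if (typeof define !== 'function') { var define = require('amdefine')(module); }

define(function (require) {
    var dep = require('dep'); // 'dep' is a placeholder dependency

    return {
        greet: function () { return 'hello from ' + dep.name; }
    };
});
```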

I would suggest that npm only be used for node-based code though, to reduce confusion on what can be used where. Particularly given the weaknesses of node's module system for browser use. However, some people like to distribute code targeted for the browser in node because they like npm. So be it.

But also note that a strength of node-based npm use, nested node_modules for package-specific conflicting dependencies, is actually a weakness for browser use: while disk space is cheap for node, delivering duplicate versions of code to the browser is really wasteful. Also, there is no need for compiled C code dependencies in the browser case, so some of npm's capabilities around that are unnecessary.

It would be better to use a different type of package manager for front-end code: one that tries to reuse module versions already installed, possibly warns the user about differences, and, only when it is really needed, writes out an AMD map config.
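A map config of that kind might look like the following (the module IDs are illustrative; map is part of the real RequireJS configuration API):

```js
// If 'some/newmodule' and 'some/oldmodule' genuinely need different versions
// of 'foo', a front-end package manager could emit this instead of shipping
// duplicate copies to every consumer.
requirejs.config({
    map: {
        'some/newmodule': { 'foo': 'foo1.2' },
        'some/oldmodule': { 'foo': 'foo1.0' }
    }
});
```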

In closing:

My biggest complaint is that node explicitly ignored browser concerns when designing itself and choosing its module system, but then some people want to use those design decisions and force them on browser use of modules, where they are not optimal. Just as node did not want to compromise to meet existing browser uses of JS, I do not want to use a less-capable module system in the browser.

I am hopeful that ES modules will have enough plumbing to avoid the function wrapper of AMD. But be aware that ES semantics will be much more like AMD's than node's. And the ES module loader API will be robust enough to support config options and AMD loader plugins.

But note: this is not to say that someone cannot use node modules with npm and browserify to make something useful that runs in the browser. Far from it. But it will be restricted to the constraints above. There are still wonderful things that fit in that box, so more power to them for constraining themselves to that box and still shipping something useful. There is just more to browser-based module usage. And I do not want their unwillingness to address that wider world to become a reason to accept less for myself. The good news is that the internet is big enough for both sets of users. So let's all hug and move on to just making things.

(note this is a gist, so I am not notified of comments. I may delete this at some point if I get around to doing something more polished for a blog post, or just do not want to see its rough form any more)

@mikeal

mikeal commented Nov 2, 2013

@mreinstein we actually don't "build." Instead we put all the buildy code in the route handler and cache the resource in memory indefinitely.

This means that dev mode is just a flag that turns on source maps and avoids minification. It also sets up a file watcher that flushes the cache on any change. We found that reducing the differences between dev mode and production, as well as removing a step people might forget to run in either dev or deploy, reduced errors and bugs.

@domenic

domenic commented Nov 5, 2013

@mikeal to be fair, there is one problem not solvable within Node's authoring format, which @jrburke has emphasized several times. That problem is wanting to load modules cross-origin without CORS. You need a JSONP-like function wrapper for that, which AMD provides and Node's authoring format does not.

(I am actually unclear whether ES6 modules will support CORS-less cross-domain loading; I recall a few confusing discussions but not a conclusion.)

@williamcotton

@mikeal what if npm allowed you to specify the local variable names for the dependent modules? So, instead of having to write explicitly:

var someModule = require("someModule");

One could do something like this in the package.json file:

localVariables: { someModule: "someModule" }

where it is referencing the thing from the dependencies hash?

That way, some mechanism can choose to write to "CommonJS" style and some other mechanism can write to AMD style.

Current modules would continue to work just fine, and authors who are interested in more easily supporting AMD could rely on a built-in mechanism.

@csnover

csnover commented Nov 6, 2013

The thing that really grinds my gears the more time goes on is that all Node.js really needed to do to be “compatible” with basic AMD modules was to expose a define function that would read the dependencies and consume the factory function. The same node_modules filesystem module lookup system could have been used, the same module cache, the same configuration, and people that wanted to write modules only for Node.js could write them with the less unfortunate syntax they use today. The only practical difference in doing this would be that AMD modules become easier to use within the Node.js ecosystem because they Just Work like any other Node.js module, with the added benefit of also working natively in a browser. The Node people handle dependency conflicts with more specific node_modules directories, the browser people handle dependency conflicts with AMD map, and everyone wins (more or less).
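For illustration, a rough sketch of what that built-in define could have looked like, assuming it ran inside node's existing per-module wrapper so that require, module, and exports below refer to the loading module (this is a sketch, not node's or amdefine's actual code):

```js
// Sketch only: assumes node exposed a define() inside its existing per-module
// function wrapper, alongside require, module, and exports.
function define(deps, factory) {
    if (typeof deps === 'function') {            // define(factory) form
        factory = deps;
        deps = ['require', 'exports', 'module'];
    }
    var args = deps.map(function (id) {
        if (id === 'require') return require;
        if (id === 'exports') return module.exports;
        if (id === 'module') return module;
        return require(id);                      // reuses node_modules lookup and the module cache
    });
    var result = factory.apply(null, args);
    if (result !== undefined) {
        module.exports = result;
    }
}
```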

Instead of doing this, libraries like Dojo that try to provide useful, standard, cross-platform code and utilities without requiring any compiler or middleware (since there are, sadly, still many groups that develop primarily or exclusively against e.g. Java backends) are stuck either adding even more dependencies and even more boilerplate to every single module, or having multiple releases (which somehow invariably confuses people), or forcing users to commit to loading their application in Node.js in a totally different way to the way applications normally load in Node.js.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library, it makes me feel sad that the barrier to adoption is so high that I can’t ever hope to compete with the contemporary Node.js “alternatives” in popularity, in large part because Node.js authors decided against allowing the AMD syntax to be supported as a first-class module format.

Oh well. Come 2020, we’ll probably be able to use ES6 modules everywhere. Probably.

@joesepi

joesepi commented Nov 7, 2013

I am presenting on this very topic at CascadiaJS next week so thank you, everyone, for all of this fantastic content. :)

If anyone wants to debate this more in person, I'll be in Vancouver soon and happy to buy a round or two.

Cheers!

@mreinstein

That problem is wanting to load modules cross-origin without CORS.

Can't that same library that is already exposed via the web be made available as an npm/bower/component/whatever package, and pulled into your project instead of loaded via CORS? To me this exemplifies the difference in philosophy between the two projects: RequireJS's goal is to provide a solution to every conceivable way that you might possibly consider declaring dependencies. Browserify doesn't support as many of these cases, but it avoids the technical complexity/debt of doing so.

I don't pretend to speak for everyone. For me (and probably some others), I'm not willing to make that tradeoff. For example, take another edge case that was brought up earlier: could I design my apps to use something like require(prefix + "/some_module")? Of course I could. But I suspect that if I saw this kind of thing in someone else's code that I had to maintain, I would not be happy trying to track down how prefix gets assigned. And the fact that it can be assigned in many convoluted ways ensures that some idiot will.

@mikeal

mikeal commented Nov 7, 2013

@csnover it is not "the node way" to add things to core which can be accomplished by modules. Tools like browserify support AMD; they live in the ecosystem.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client. Everything @substack and @maxogden do is the same. The entire voxeljs ecosystem is built on dual purpose libraries including all the ndarray modules.

You are only a minority in that you are doing this in a smaller ecosystem. Most of the node community is writing things that work in both places, most of the modules in npm work with browserify. This is the world we live in, you might consider joining us :)

@dylans

dylans commented Nov 7, 2013

@mikeal, I don't like answers that essentially say "our ecosystem is big, you should suck it up and do things our way, even if they're not ideal". It's a cop-out, and it basically says that because this is a problem you don't care about, no one else should care. Claiming that you're right because you're popular isn't useful. What is your actual technical argument against what @csnover has said? Frankly, the JavaScript module loading ecosystem existed long before Node.js and npm. That you choose to ignore Dojo and the painful lessons we learned, and later AMD, is of course your choice, but stop insulting us by saying we're a minority. Browserify does not solve the problems we need to solve every day for our users.

@csnover

csnover commented Nov 7, 2013

Hi @mikeal,

it is not "the node way" to add things to core which can be accomplished by modules

In my response I was talking about how "the node way", where Node.js intentionally decides not to make even simple compatibility changes to core (and in fact made such changes once but then reverted them), negatively impacts users. You are welcome to look at it as a more positive thing, but as an AMD user, this is the perspective I have. I like Node.js, but it's very frustrating to me because of this singular problem.

Maybe I also do not understand what "core" Node.js is, but it seems to me to include many things that could be provided by modules: crypto, ssl, http, url, punycode, and on and on. Add the fact that you can write C++ module extensions for V8, and Node.js "core" would pretty much be nothing but V8 itself if it followed "the node way". So I obviously do not understand this perspective, except insofar as it maybe provides a convenient excuse not to be compatible with an inferior (but necessary for async platforms) module format.

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client.

Relative to the number of JavaScript developers there are out there, it is practically a rounding error. Node.js developers are themselves a fairly small subset of the JavaScript ecosystem.

This is the world we live in, you might consider joining us :)

A world in which the responsibility to pragmatically improve cross-platform compatibility is abdicated is not really a world I want to live in. Thank you for the offer, though.


ghost commented Nov 9, 2013

@domenic ES6 modules will support cross-domain loading without CORS. Scripts loaded in this manner won't go through all of the Loader callbacks, since that would expose their source. But they'll at least still load.
