Doing this as a gist, because I don't have time for a polished post, I apologize:
If you had to define an optimal module system that had to work with async, networked file IO (the browser) what would that look like? It would not be node's system, for the following reasons:
- the `require('')` in node is very imperative. It can be called at any time, and means a module should be synchronously loaded and evaluated when it is encountered. How would this work if this call is encountered in the browser case: `require(someVariable + '/other')`? How would you know what module to include in any bundle? For those cases, you should allow for an async require to fetch, and leave dependencies that can be bundled to the `require('StringLiteral')` format. Node has no allowance for this, and browserify will not do a good job with those types of dependencies. It will need hints, or explicit out-of-module listing of what to include.
Once you have an MVC system in the browser, it will be common to delay loading of a view and its controller until it is needed, and rely on routing via URLs or button actions to know what module to imperatively ask for next. This works well in AMD since it has a callback-style require for this. This really helps performance: the fastest JS is the JS you do not load. This is a very common practice in use with AMD systems. browserify cannot support it natively. The suggestion when using browserify is to choose your own browser script loader and figure out the loading yourself.
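The callback-style require can be sketched like this. The view name, the toy registry, and the three-line `require()` stand-in are all illustrative; a real AMD loader would fetch the module file over the network asynchronously.

```javascript
// Toy module registry standing in for modules that would normally be
// fetched on demand; 'views/inbox' is an invented module ID.
var registry = {
  'views/inbox': { render: function () { return 'inbox rendered'; } }
};

// Stand-in for AMD's callback-style require(): async in a real loader,
// synchronous here just so the sketch runs.
function require(deps, callback) {
  callback.apply(null, deps.map(function (id) { return registry[id]; }));
}

// A route handler only asks for a view when the user navigates to it,
// so unvisited views cost zero bytes and zero parse time up front.
var lastRender;
function showView(name) {
  require(['views/' + name], function (view) {
    lastRender = view.render();
  });
}

showView('inbox');
```

The key point is that the dependency name is computed at navigation time, which is exactly the case a purely synchronous, bundle-everything approach cannot express.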
- Similarly, since there is no synchronous module fetching and execution in the browser (at least it would be madness to want to do that), you cannot reliably preload a modifier to the module system so that it takes effect for the rest of the module loading. An example in node is requiring coffeescript in your top level app so that any .coffee files are considered in module loading.
Instead, AMD has loader plugins that allow you to specify the type of resource that is being requested. This is much more robust, and leads to better, explicit statements of dependencies. Plus, loader plugins can participate in builds. This is better than the browserify plugins for this type of thing because of the imperative nature of JS code: `fs.readFile(someVariable + 'something.txt')`.
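A loader plugin is just a module that exports a `load()` method; the loader splits a dependency ID like `text!greeting.html` on `!` and delegates the fetch to the named plugin. Here is a runnable sketch where the stand-in text plugin, the file contents, and the five-line loader stand-in are all illustrative:

```javascript
// Stand-in text plugin: a real one would XHR the file in the browser,
// or read it from disk during a build.
var plugins = {
  text: {
    load: function (name, parentRequire, onload) {
      onload('<h1>hello from ' + name + '</h1>');
    }
  }
};

// Stand-in loader: a dependency like 'text!greeting.html' names both the
// resource and the plugin that knows how to fetch it, so the dependency
// stays statically declared and can participate in builds.
function loadDep(dep, callback) {
  var parts = dep.split('!'); // e.g. ['text', 'greeting.html']
  plugins[parts[0]].load(parts[1], loadDep, callback);
}

var html;
loadDep('text!greeting.html', function (contents) { html = contents; });
```

Because the plugin name and resource name are both in the dependency string, a build tool can run the same plugin at build time and inline the result, which is the "participate in builds" point above.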
Static declaration of dependencies is better for builds. So if you like builds, favor a system that enforces static declaration, and has an async form for non-declarative module uses. ES modules will go this route, and explicitly not use node's module system because of this imperative dependency issue that node has.
The parts that are ugly about AMD:
- It explicitly asks for a function wrapper. Node actually adds one underneath the covers when it loads modules. browserify does too when it combines modules. While it would be nice to not have the function wrapper in source form, it has these advantages:
a) Using code across domains is much easier: no need to set up CORS or restrict usage to CORS enabled browsers. You would be amazed by how many people would have trouble with this, as it is a hidden, secondary requirement. So, they are developing just fine on their local box, do a production deployment, and then things don't work. This is confusing, and it is not obvious why it fails. You may be fine with knowing what to do for this, but the general population still has trouble with it.
b) Avoids the need to use eval(). Eval support in the browser has traditionally been uneven (scope differences), and CSP now makes it even more of a hazard to use. I know of at least one case of a "smart" proxy that would try to "protect" a user by stripping out eval statements in JS.
In short, the function wrapper is not the ideal, but it avoids hard to trace secondary errors, and it really is not that much more typing. Use the sugar form if you want something that looks like commonjs/node.
- Possibility for configuration blocks: front end developers have much more varied expectations on project layout. Node users do not. This is not the fault of requirejs or other AMD loaders though. At the very least, supporting a configuration block allows more people to participate in the benefits of modular code.
However, there is a convention in AMD loaders of `baseUrl + module ID + '.js'`. If a package manager lays out code like this, then there is no config block needed in an AMD loader. volo does this.
npm could even do this to bridge the gap: when installing a package called `dep`, put it at `node_modules/dep` and create a `node_modules/dep.js` that just requires `dep`'s main module. That would also then work for AMD loaders (if the modules installed were AMD compatible or converted) if the baseUrl was set to `node_modules`.
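The convention is simple enough to state in one line of code. The baseUrl value and module IDs below are illustrative, as are the hypothetical adapter file contents in the comment:

```javascript
// baseUrl + module ID + '.js': the whole lookup rule, no config block needed.
function idToUrl(baseUrl, id) {
  return baseUrl + '/' + id + '.js';
}

var adapterUrl = idToUrl('node_modules', 'dep');   // 'node_modules/dep.js'
var mainUrl = idToUrl('node_modules', 'dep/main'); // 'node_modules/dep/main.js'

// The hypothetical node_modules/dep.js adapter itself could just be:
//   define(['dep/main'], function (main) { return main; });
```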
So, it is entirely possible to stick with the AMD convention and avoid config blocks. Package manager tools have not caught up yet though. Note that this is not the fault of the core AMD module system. This is a package manager issue. And frankly, package managers have been more focused on layouts that make it easy for them versus what is best for runtime use and for avoiding user configuration. This is the wrong decision to make. Making it easier for users and runtimes does not actually make things that much more complicated for the package manager.
On package managers:
It is important to separate what a package manager provides and what the runtime module system provides. For example, it is possible to distribute AMD-based code in npm. amdefine can help use that code in a node environment, particularly if the `dep.js` style of file is written out in the node_modules directory.
I would suggest that npm only be used for node-based code though, to reduce confusion on what can be used where. Particularly given the weaknesses of node's module system for browser use. However, some people like to distribute code targeted for the browser in node because they like npm. So be it.
But also note that a strength for node-based npm use, nested node_modules for package-specific conflicting dependencies, is actually a weakness for browser use: while disk space is cheap for node uses, delivering duplicate versions of code in the browser is really wasteful. Also, there is no need for compiled C code dependencies in the browser case, so some of npm's capabilities around that are unnecessary.
It would be better to use a different type of package manager for front end code: one that tries to reuse existing installed module versions, possibly warns the user of version conflicts, and only writes out an AMD map config in the case that a duplicate version is really needed.
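The AMD map config mentioned here is the escape hatch for when two packages truly need different versions of a dependency. The module IDs below are illustrative; the shape follows requirejs's map config:

```javascript
// Everyone shares dep 1.2, except packageB, which is mapped to the
// older version it requires. Only the conflicting module is duplicated.
requirejs.config({
  map: {
    '*': { 'dep': 'dep-1.2' },
    'packageB': { 'dep': 'dep-1.0' }
  }
});
```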
In closing:
My biggest complaint is that node explicitly ignored browser concerns when designing itself and choosing its module system, but then some people want to use those design decisions and force them on browser use of modules, where they are not optimal. Just as node did not want to compromise to meet existing browser uses of JS, I do not want to use a less-capable module system in the browser.
I am hopeful that ES modules will have enough plumbing to avoid the function wrapper of AMD. But be aware that ES module semantics will be much more like AMD's than node's. And the ES module loader API will be robust enough to support config options and AMD loader plugins.
But note: this is not to say that someone cannot use node modules with npm and browserify to make something useful that runs in the browser. Far from it. But it will be restricted to the constraints above. There are still wonderful things that fit in that box, so more power to them for constraining themselves to that box and still shipping something useful. There is just more to browser-based module usage though. And I do not want their unwillingness to address that wider world to become a reason for me to accept less. The good news is that the internet is big enough for both sets of users. So let's all hug and move on to just making things.
(note this is a gist, so I am not notified of comments. I may delete this at some point if I get around to doing something more polished for a blog post, or just do not want to see its rough form any more)
@maxogden:
I would phrase this as "if you want to use npm and node's style of module declarations, and package up some code to deliver to the browser, then browserify is a great option. If you need dynamic loading, or have other requirements, requirejs is a great option".
Assuming that monolithic libraries are the only thing used with requirejs, or that it cannot take small bundles of modules and deal with builds, is not a correct assessment.
I can see where you may have made that classification by looking at some projects that have used requirejs. Traditional browser JS libraries may have packaged up a bunch of functionality for ease of distribution, but hopefully the blossoming of more JS package managers shows there are options now. And respect to npm for being a great example of a package manager. But requirejs is fine using lots of small modules, and there are build tools for the ecosystem too. Just small differences in code layout conventions and module style.
Great work on voxeljs and the open data initiatives, btw.
@dominictarr:
This will be common with MVC systems that want to load the next views on demand. The goal is to get fast startup times, and the fastest JS is the JS that does not get downloaded or executed until needed. There are web IDEs like Brackets and Ace that are bigger user experiences, so expect to find dynamic loading in them.
At the moment, I work on an email app for Firefox OS. Even though all the HTML/JS/CSS are local, multiple IO operations on the device can still be slow enough that some bundling of upfront assets is useful, but due to memory and CPU constraints, not all the cards for the UI should be loaded up front. The code is not the prettiest (my work on this app is part of a gradual morphing of older code that was not as modular, and due to diff reviews and other priorities it has to go in phases), but some links that might be helpful:
I expect you will find a similar pattern with any MVC-backed app that uses requirejs. They may use routes or view/controller triggers to dynamically load the next bits.
I see your name pop up quite a bit in the node community, nice to converse with you online.
@bclinkinbeard:
I feel like we may have commented on an issue together, or read something similar from you before on the internet? In any case, your handle looks familiar, hello! To your question:
It is very common to see builds with AMD systems, but also dynamic loads of sections on demand. The point about not needing a build system to use the module system is to highlight that all the front end module use cases are solved correctly. Dynamic loading of the next view in a web app is a very common use case.
So feel free to always build if that is what you prefer. Hopefully the above responses also help highlight the non-build, or non-single file build, use cases.
@mikeal:
Agreed. Any comment I made in this area is to focus the responsibility on the package manager to help the user with the issue. npm does not do this by default, it requires a separate dedupe module for the user to run, as I understand it. I would prefer a front end package manager to incorporate the conflict resolution and deduping up front, particularly if it means it needs the user to sort something out that it cannot do itself. But I am sure there are people that are fine with npm's behavior. While it is perhaps more steps than I would prefer, they can certainly get it to work.
Thanks for pointing out a weak phrasing on my part. "Ignored" was too strong a word; "discarded" is probably the better term. Because of what was discarded, though, the CommonJS-derived system has weaknesses when used in the browser. That does not mean it is totally useless in the browser, but it does have weaknesses.
As @donspaulding mentions, your developer breakdown does not feel right. I think you may be attaching too much weight to where I came from, or to the others involved with the AMD effort, primarily from the Dojo community, and the perception around that toolkit.
The choice of the function wrapper, to avoid the eval and CORS issues, was made precisely to make things easier for all the amateurs. They are the ones who would get bitten by the hidden costs of trying to use a CommonJS system (or even the old Dojo system) directly in the browser.
Reducing support requests was a factor in the AMD design, and the choices were made because Dojo's older system was too clever, and we saw the CommonJS path taking a similar route. The main point though is that AMD was driven by real use cases, purposely made more literal, and then evolved based on actual use and broad experience in the field.
Node did not have the same use cases though, and it got to choose its package management system and file layout conventions fairly early, so that really helps remove a lot of choices for developers. That is a good thing. More people using the same convention that is enforced by the environment.
Retrofitting modular use in the browser is harder given different communities that work in it have already established conventions. So it is closer to a bazaar than to node's cathedral (a very small cathedral that is community-built and allows for community-attached bazaars -- and I love to visit it when doing command line tools and servers).
I am hopeful though that, as more front end communities and their package managers get used to the `baseUrl + moduleID + '.js'` convention for code layout, it can get a lot better for the front end, and I believe AMD loaders have really helped make the case for modular front end code. It can be a perception change from the old script-based system though, so naturally there is friction when learning new ways to do things.

Contemporary server/command line systems, yes. Front end web development has many continents to it though, and many of them can be hard to measure given the wild west of its past. The popularity or growth of node's continent does not mean it provides adequate tooling for other continents. Luckily the internet is big enough for all the continents. It is not a zero-sum game.
If node/npm wanted to expand better to the front end continents, I can see where there could be some small adjustments or options to how npm lays out files, and there is nothing preventing AMD modules being distributed in npm, they can work via an adapter when used directly in node.
However, I would not strongly advocate for that route, because node and npm have done well to limit themselves to their immediate concerns. Just do not expect those choices to work well on other JS continents, and expect others to point out the weaknesses when they are used in other environments, and to build other systems.
And as mentioned before, node+npm+browserify still solve browser coding issues for some people. For me, it is the same class of browser use cases that the Rails asset pipeline solves, so it does not really match my needs. But I completely understand it is enough for some people. And at least it uses JavaScript!
BTW, I really appreciate your efforts around JS community conferences. Not a skill I have, but community is very important, and it is great to see people putting energy into small, respectful conferences.