@jrburke
Last active December 26, 2015 20:19
Again with the modules

Doing this as a gist because I don't have time for a polished post; I apologize:

If you had to define an optimal module system that had to work with async, networked file IO (the browser), what would that look like? It would not be node's system, for the following reasons:

  1. the require('') in node is very imperative. It can be called at any time and mean a module should be synchronously loaded and evaluated when it is encountered. How would this work if this call is encountered in the browser case: require(someVariable + '/other')? How would you know what module to include in any bundle? For those cases, you should allow for an async require to fetch, and leave dependencies that can be bundled to the require('StringLiteral') format. Node has no allowance for this, and browserify will not do a good job with those types of dependencies. It will need hints, or explicit out-of-module listing of what to include.
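To make the contrast concrete, here is a toy sketch (not browserify's actual implementation) of why static analysis can only discover string-literal dependencies; the computed id is invisible to the bundler:

```javascript
// Toy dependency scanner: collects require('<literal>') calls from
// source text, the way a bundler's static analysis pass would.
function findStaticDeps(src) {
  var deps = [];
  var re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  var match;
  while ((match = re.exec(src)) !== null) {
    deps.push(match[1]);
  }
  return deps;
}

var src = "var a = require('lib/a');\n" +
          "var b = require(someVariable + '/other');";

findStaticDeps(src); // finds only 'lib/a'; the computed id cannot be bundled
```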

Once you have an MVC system in the browser, it will be common to delay loading of a view and its controller until it is needed, and rely on routing via URLs or button actions to know what module to imperatively ask for next. This works well in AMD since it has a callback-style require for this. This really helps performance: the fastest JS is the JS you do not load. This is a very common practice in use with AMD systems. browserify cannot support it natively. The suggestion when using browserify is to choose your own browser script loader and figure out the loading yourself.
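A minimal sketch of that callback-style require, with a local registry standing in for network fetches (the module ids are illustrative; a real loader would fetch the script asynchronously and invoke the callback once it arrives):

```javascript
// Registry stands in for modules a real loader would fetch on demand.
var registry = {
  'views/settings': function () {
    return { render: function () { return 'settings rendered'; } };
  }
};

// Callback-style require: nothing is loaded until a route or button
// action actually asks for the module.
function asyncRequire(ids, callback) {
  var modules = ids.map(function (id) { return registry[id](); });
  // A real loader would call back after the script loads; the sketch
  // calls back directly to stay synchronous and self-contained.
  callback.apply(null, modules);
}

var result;
asyncRequire(['views/settings'], function (settingsView) {
  result = settingsView.render();
});
```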

  2. Similarly, since there is no synchronous module fetching and execution in the browser (at least it would be madness to want to do that), you cannot reliably preload a modifier to the module system so that it takes effect for the rest of the module loading. An example in node is requiring coffeescript in your top-level app so that any .coffee files are considered in module loading.

Instead, AMD has loader plugins that allow you to specify the type of resource that is being requested. This is much more robust, and leads to better, explicit statements of dependencies. Plus, loader plugins can participate in builds. This is better than the browserify plugins for this type of thing because of the imperative nature of JS code: fs.readFile(someVariable + 'something.txt').
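The plugin mechanism can be sketched as prefix dispatch: a 'plugin!resource' id routes to the named plugin, which knows how to load that resource type (the text plugin here is a stand-in, not the real implementation, which fetches over XHR and can inline contents at build time):

```javascript
// Loader plugins keyed by prefix; each knows how to load one
// resource type.
var plugins = {
  text: function (resource) { return 'contents of ' + resource; }
};

// Split 'prefix!resource' and hand the resource to its plugin.
function loadResource(id) {
  var bang = id.indexOf('!');
  if (bang === -1) {
    throw new Error('not a plugin id: ' + id);
  }
  var prefix = id.slice(0, bang);
  var resource = id.slice(bang + 1);
  return plugins[prefix](resource);
}

loadResource('text!templates/row.html'); // → 'contents of templates/row.html'
```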

Static declaration of dependencies is better for builds. So if you like builds, favor a system that enforces static declaration, and has an async form for non-declarative module uses. ES modules will go this route, and explicitly not use node's module system because of this imperative dependency issue that node has.

The parts that are ugly about AMD:

  1. It explicitly asks for a function wrapper. Node actually adds one under the covers when it loads modules. browserify does too when it combines modules. While it would be nice to not have the function wrapper in source form, it has these advantages:

a) Using code across domains is much easier: no need to set up CORS or restrict usage to CORS-enabled browsers. You would be amazed by how many people have trouble with this, as it is a hidden, secondary requirement. So, they are developing just fine on their local box, do a production deployment, then things don't work. This is confusing and it is not obvious why it fails. You may be fine with knowing what to do for this, but the general population still has trouble with it.

b) Avoids the need to use eval(). Eval support in the browser has traditionally been uneven (scope differences), and now CSP makes it even more of a hazard to use. I know there has been at least one case of a "smart" proxy that would try to "protect" a user by stripping out eval statements in JS.

In short, the function wrapper is not ideal, but it avoids hard-to-trace secondary errors, and it really is not that much more typing. Use the sugar form if you want something that looks like commonjs/node.

  2. Possibility for configuration blocks: front end developers have much more varied expectations on project layout. Node users do not. This is not the fault of requirejs or other AMD loaders though. At the very least, supporting a configuration block allows more people to participate in the benefits of modular code.

However, there is a convention in AMD loaders of baseUrl + module ID + '.js'. If a package manager lays out code like this, then there is no config block needed in an AMD loader. volo does this.

npm could even do this to bridge the gap: when installing a package called dep, put it at node_modules/dep and create a node_modules/dep.js that just requires dep's main module. That would also then work for AMD loaders (if the modules installed were AMD compatible or converted) if the baseUrl was set to node_modules.

So, it is entirely possible to stick with the AMD convention and avoid config blocks. Package manager tools have not caught up yet though. Note that this is not the fault of the core AMD module system; it is a package manager issue. And frankly, package managers have been more focused on layouts that are easy for the package manager vs. what is best for runtime use and avoids user configuration. This is the wrong decision to make. Making it easier for users and runtimes does not actually make things that much more complicated for the package manager.

On package managers:

It is important to separate what a package manager provides and what the runtime module system provides. For example, it is possible to distribute AMD-based code in npm. amdefine can help use that code in a node environment, particularly if the dep.js style of file is written out in the node_modules directory.

I would suggest that npm only be used for node-based code though, to reduce confusion on what can be used where. Particularly given the weaknesses of node's module system for browser use. However, some people like to distribute code targeted for the browser in node because they like npm. So be it.

But also note that a strength for node-based npm use, nested node_modules for package-specific conflicting dependencies, is actually a weakness for browser use: while disk space is cheap for node uses, delivering duplicate versions of code in the browser is really wasteful. Also, there is not a need for compiled C code dependencies in the browser case. So some of npm's capabilities around that are unnecessary.

It would be better to use a different type of package manager for front end code: one that tries to reuse the module versions already installed, warns the user of differences, and, only if truly necessary, writes out an AMD map config.
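The map config mentioned here follows the shape RequireJS uses for its map option; the module ids below are illustrative. The point is that only the consumer that truly needs the conflicting version gets pointed at it, instead of shipping two copies to everyone:

```javascript
// AMD map config: '*' sets the default mapping for all modules; one
// legacy consumer is pinned to the older version of 'foo'.
var config = {
  map: {
    '*': { 'foo': 'foo1.2' },
    'legacy/widget': { 'foo': 'foo1.0' }
  }
};
```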

In closing:

My biggest complaint is that node explicitly ignored browser concerns when designing itself and choosing its module system, but then some people want to use those design decisions and force them on browser use of modules, where they are not optimal. Just as node did not want to compromise to meet existing browser uses of JS, I do not want to use a less-capable module system in the browser.

I am hopeful that ES modules will have enough plumbing to avoid the function wrapper of AMD. But be aware that ES semantics will be much more like AMD's than node's. And the ES module loader API will be robust enough to support config options and AMD loader plugins.

But note: this is not to say that someone cannot use node modules with npm and browserify to make something useful that runs in the browser. Far from it. But it will be restricted by the constraints above. There are still wonderful things that fit in that box, so more power to them for constraining themselves to that box and still shipping something useful. There is just more to browser-based module usage, though. And I do not want their unwillingness to address that wider world to be a reason to accept less for myself. The good news is that the internet is big enough for both sets of users. So let's all hug and move on to just making things.

(note this is a gist, so I am not notified of comments. I may delete this at some point if I get around to doing something more polished for a blog post, or just do not want to see its rough form any more)

@jostsg

jostsg commented Nov 1, 2013

It would be better to use a different type of package manager for front end code that tried to reuse existing module versions installed, possibly warn the user of diffs

Bower does what you ask for. It also allows you to force the latest version, for better or worse.

I believe you are right, npm is not working for the browser. That's why I usually add a bower.json as well as a package.json to any module that is supposed to be used in both environments. This way one gets a warning when there is a version conflict and it is possible to provide different main files if necessary. With the bower list command it is also pretty easy to concatenate all the files for the client.

On the subject: I can't see how these two are competing technologies. If we are talking npm, it is just a package manager, and require.js, at least to me, serves as a module loader.
Then there is Node.js, which comes with a module loader, that embraces a common API for modules, while the browser doesn't have one at all (yet).

If I needed to describe my ideal module system right now, it would contain...

  • a module loader for each environment
  • one module API
  • a thing that helps me fetch my dependencies

Though, again, I prefer a module-fetch thing for each environment, Node.js and the browser.

I imagine, this can be accomplished by throwing bower, npm, commonjs and require.js into the mix. It involves some build steps though.
For module loading in the browser, I am also a big fan of the idea the Google guys presented at last year's jsconf.eu.

My thoughts take into consideration that people want to be able to use modules in both environments. The ideal module system might look different if one works in one environment exclusively. For Node.js that would be pretty obvious.

@bclinkinbeard

@SlexAxton and @jrburke (I think we chatted on Twitter once or twice), thanks for your follow up. I think you both made lots of reasonable points. I still prefer the Browserify approach, but if I am being honest it is largely that: a preference. Obviously, both AMD and Browserify can lead to failure or success, and the defining factor is the skill and care of the developers behind the project.

Something I encountered during my time with AMD, which I think may contribute to the perception that it's associated with monolithic libraries, is that very few libraries seem to properly support the format. You therefore end up having to use the config to wrap a lot of things that are defined on window, which I think (and hope) we can all agree is a bad thing (attaching things to window, that is). I (like to) think that the CommonJS format encourages people to write better code, or at the very least prevents them from polluting the global scope.

I also don't really buy the complaint that using Browserify forces you to run Node. I mean, it does of course, but are you really not already using it for Grunt and the like? I can't imagine doing modern web development without the arsenal of tools we have today, and almost all of them run on top of Node.

Lastly, just to provide an anecdotal contradiction to @SlexAxton's projection, we are using Browserify on my current project with great success. We are building a SPA using AngularJS, jQuery and D3 that is very heavy on data visualization, and talks to a Java back end running on AWS with PostgreSQL. I don't know the exact number of JS modules or LOC, but I would ballpark the modules around 200. We build Angular, jQuery, D3, underscore, etc. into their own libs.js bundle, which is 102 kb gzipped. Our app.js bundle is 35 kb gzipped. Not incredibly tiny, but it runs very well on everything from a first generation iPad mini to desktop browsers.

@williamcotton

@substack is your improv comedy troupe stuck in the 70s as well?

I kid, but if you want to meet up over a beer and talk about a bunch of shit like this sometime, I'd love to! I live in SF and I'm not even afraid of visiting big bad old Oakland.

I'm serious, let me know!

@mikeal

mikeal commented Nov 2, 2013

I'm starting to see people talking past each other and I think I know why.

Commonly used browser libraries have, traditionally, been rather large. There are a lot of reasons for that, but we can admit that the average library was significantly larger than a typical node module.

Node modules are very small, obviously.

The likelihood of a version change breaking dependencies or applications is directly proportional to the size of that library.

When a node module's minor version changes, the assumption is that you should make sure everything works before depending on a new one, because the likelihood of that change affecting your code is much larger than for a minor release of jQuery, which people update all the time without even checking.

If your goal is to grow an ecosystem as large as node's then you'll need to assume that modules are smaller and that changes are more likely to break.

@mikeal

mikeal commented Nov 2, 2013

I appreciate that you were around when this stuff was going down, but this just isn't true. I can't count the number of times people talk about how they didn't have to worry about the constraints of the browser. The fact that browserify didn't come until years later from someone outside of this original group only makes that picture more clear.

You misunderstand what node's module system is and when it was formed.

The earliest versions of node took their module system directly from CommonJS. Yes, CommonJS was created outside the constraints of the browser primarily by the developers of Narwhal.

Even during the earliest days of npm @isaacs was working with @ry to tweak the module system. In fact, the way @isaacs ended up becoming one of the earliest "committers" was by taking over responsibility for node's module system and by late 2010 npm and the module system were being developed in tandem.

Browserify 0.0.1 has a version target of 0.2, which means it was created before the module system truly became "node's". @substack was also involved in the changes in the module system, mostly reactively, since most changes rippled through browserify.

Most of the complaints I see here, especially those relating to versioning and localization of deps are all decisions made long after browserify was created and are unique to node and in some cases even cause spec incompatibilities with CommonJS. In fact, without browserify I think some things may have been very different but browserify was showing that browser packages could be built from node's module system without changes.

So yes, the browser was considered. Remember that during this time @isaacs and I were still in CommonJS while they went down the AMD rabbit hole and the first version of the npm registry was even described as a standard for CommonJS around the same time people were arguing about which AMD spec to go with.

Node's module system was free not to ignore the browser or to focus on being a systems module system; node's module system was free to focus on being the best module system for modules.

That is why it has succeeded: the first and most important customer of a module is modules, and node continues to be the easiest way to write, publish, and consume modules. Every other consideration is secondary.

You keep showing that graph. I don't think it means what you think it means. That node is popular and has a module system does not automatically make that module system best, and it especially doesn't make it best for the browser.

If I may paint my own projection: people who run node.js full-stack, completely buy into the node and npm ecosystem, and probably don't have that much to do on the front-end can and should use npm/browserify for their front-end module/build system; for anyone who needs to write a significantly complex application that does not need to rely on a compatible backend, there's AMD. AMD is for the web.

I have a lot more graphs now, and a lot more data :) I've been ripping data out of GitHub for a while and it says a lot more than just the module counts do.

The first thing it shows is that there really isn't a node ecosystem and a browser ecosystem. A huge portion of the engagement is in browser tools and modules, and the people stretch between different parts of the ecosystem.

Once you realize that node, on a community level, is not separate from the web but is of the web, and has become a vertebra in the spine of web development, these arguments become very, very moot.

Which module system is best is subjective; node's is the fastest growing, not just in comparison to other JS module systems but in comparison to all module systems. Adoption matters.

We aren't fighting for what system people use to package modules into their apps, and RequireJS certainly has more features to offer here than browserify; we're fighting for how people define their modules. In that, node's module system is winning, and all browser tools would benefit from adopting support for modules built in it. For its part, browserify will consume a module written to any modern standard.

@mikeal

mikeal commented Nov 2, 2013

@jrburke

Thanks for the kind words :)

I mentioned this in my reply to Alex, but I don't believe there are continents of "frontend" and "backend"; I believe we are instead awash in the sea of the web.

Compatibility breeds growth. The more compatibility you can ensure between your parts the more you'll grow new parts. IMO node already won whatever fight we had over how modules get defined. I won't begrudge people who choose to do otherwise but any tool that is built for the benefit of any web developer should support modules written in the fastest growing format in addition to whatever other format it wishes to use in order to extend their functionality.

@Matt-Esch

The jQuery argument is very difficult to follow. In many senses jQuery is a large, framework-like choice. It is the equivalent of an environment. Small modules shouldn't really require jQuery at a version as a dependency. You either write a jQuery plugin or you export a function that is passed jQuery as an argument. Fishing the jQuery token out of the browser global scope and passing it into the module is far better than requiring jQuery anywhere but at the top level of your application (not that I would recommend either). Please extend this argument to any other monolithic environment that may cause file size issues ;)

I also agree with @dominictarr in that freedom to refactor at your leisure is an important feature if you are going to encourage developers to refactor at all (that and a decent test suite).

But I digress. Having used browserify extensively in production, it pays to keep track of your dependency tree, simply because the optimization problem is something you have to take charge of. I think this discussion is primarily focused on browser optimization and the freedom to make choices about these optimizations.

As far as I am concerned, a require statement is communication of intent, and require('module') is a pretty low complexity solution. It doesn't state how that thing is downloaded/bundled under the hood. It's not designed to optimize for the browser in the way you would like it to under certain conditions, but it's really out of the scope of the module system to provide this for you. If you wish to criticize browserify on implementation detail, it may be more fruitful to record these problems as github issues so they can be addressed.

Optimization is simply about hand crafting under particular use cases. It's one of the most challenging and rewarding aspects of browser development imo. It really is worth addressing anything that compromises your freedom to optimize the code in the way that you would like.

@mreinstein

Was anyone else curious about the size of the animated gif posted earlier in this thread? It's 1.7MB. Think about that for a moment. :)

The initial draw to requirejs for me was the idea of lazy loading. I learned several lessons while building a half-dozen websites in requirejs.

  • deferring loading over http has a cost you don't hear a lot about: having to stop mid-UX and wait for your bytes to load before the app can continue. I'm sure with a large enough codebase, the up-front cost of loading the whole pile of javascript overwhelms the lazy loading, but with a combined/minified/gzipped bundle, that takes a pretty dang large amount of code. The time spent with multiple network requests in flight for part of your codebase can easily overwhelm just getting the whole thing up front.
  • there's significant complexity in the requirejs config and runtime. I mean c'mon, the fact that the sanctioned optimization is to replace require.js with almond.js or something lighter at run time is a pretty clear admission of this by the authors, IMO.
  • It became really challenging to do some of the more advanced google and yahoo page speed optimization recommendations. For example, let's say you want to rename your modules based on an md5 hash of their contents, and have that still work with what r.js produces. This is annoyingly complex.
  • when I first started using modules I thought of browsers and node.js as living in different worlds. I think this is a leftover from working with old school traditional page request sites for a decade. But that line is blurring and disappearing. There's not a lot of code that won't run in both.
  • Whether you're using less, or sass, or coffeescript, or minification, or uploading to a cdn, or running unit tests, or optimizing your images: those are all build steps. Any time you need to transform your assets in some way, you have a build step. Things get so much better when you can accept that and embrace it. I'd wager that part of grunt's popularity is a wide-scale realization of this fact.
  • When you realize you've had build steps all along, building in dev isn't a scary thing to avoid anymore. In fact you embrace it because your dev/staging/prod environments are now even more similar. And you get nice things like live reload for free, and other activities that you want to tack onto your change hooks.

The end result was a switch to browserify, because simplicity, site responsiveness, and the ease of doing more advanced deploy optimizations all went up. It's no coincidence that end users who know nothing about code were remarking how much snappier the site was. Before-and-after comparisons on google page speed tests were bumping the scores from 70-80% to as high as 98-100% in many cases, too.

@jrburke
Author

jrburke commented Nov 2, 2013

@mikeal

I mentioned this in my reply to Alex but I don't believe there are continents of "frontend" and "backend" I believe we are instead awash in the sea of the web.

This would be much truer if the weaknesses mentioned in the gist were addressed. For instance, I expect ES modules, since they do have allowances for those issues, to bridge the continents better.

Compatibility breeds growth. The more compatibility you can ensure between your parts the more you'll grow new parts. IMO node already won whatever fight we had over how modules get defined.

By solving the problems mentioned in this gist, and using the same module system in the browser and in other JS envs, that will give even greater compatibility and greater growth.

I know it may be difficult for you to see given your heavy involvement in node and its ecosystem, but AMD usage is very strong. For me, this proves out that these problems are real. I am sure you can make a case for AMD usage as not being as visible as what you see in node, but then node does not try to solve the same problems either.

You can mitigate those problems by using more tools and transforms. For me, that complicates the solution.

ES modules, however they turn out, will not be a direct copy of Node's system, and will specifically address the weaknesses mentioned above. Because they are real problems. In that respect, node did not win the fight. It certainly has some great usage patterns to share though.

I won't begrudge people who choose to do otherwise but any tool that is built for the benefit of any web developer should support modules written in the fastest growing format in addition to whatever other format it wishes to use in order to extend their functionality.

The problem is that Node's module system cannot actually express equivalent module desires. There is a lot of overlap, but this gist is about where it falls short.

If you or other people in the node community get frustrated when asked about AMD use in node, here is a short, diplomatic answer that lets you move on to other conversations that you would rather have:

"No plans to change Node's module system. Wait for ES modules if you want capabilities outside of Node's system. AMD modules could be used in Node, but it requires userland modules and adapters to work. You will likely encounter less friction with other Node module usage if you stick to the basic Node module system. If you want to distribute code targeted for the browser using NPM, NPM lays out code according to Node's needs. You will need to use other tools on top of it to convert the code to a form or layout usable in the browser."

@mikeal

mikeal commented Nov 2, 2013

This would be much truer if the weaknesses mentioned in the gist were addressed.

If a few use cases and bugs would make or break this kind of thing, we wouldn't be using the web at all :)

ES modules, however they turn out, will not be a direct copy of Node's system, and will specifically address the weaknesses mentioned above. Because they are real problems. In that respect, node did not win the fight. It certainly has some great usage patterns to share though.

ES modules will support node modules. They will support them directly as a result of node's tremendous growth; until that growth was apparent, ES modules had not intended to do so, and early drafts were incompatible.

Again, adoption matters. As a module format node modules are the dominant pattern.

In the future there will likely be more loaders of node modules, including ones that work w/ ES6's module loader, but that doesn't mean that ES Modules will grow as large as node as the format used by module authors. In addition, tools like browserify will likely support modules written to the new ES Modules format in addition to their support for node's native module format.

If you or other people in the node community get frustrated when asked about AMD use in node, here is a short, diplomatic answer that lets you move on to other conversations that you would rather have:

We don't really need an answer for that because this isn't really an issue. People complain about lots of things in node but the module system is rarely one of them. Anyone asking that node switch or make breaking changes to the native module format would be quickly dismissed, the size of the existing ecosystem is far too large to break compatibility at this point. Many ideas of varying quality are dismissed quickly nowadays because they would break compatibility which is no longer acceptable.

AMD support was asked for, and added, in tools like browserify.

I don't know why you continue to insist that these problems must be addressed by the module format. They have been addressed in tools and loaders built in node, quite a few of them actually. Now, you may prefer your solution, and the one you have built may address many of these issues together in an alternative module format in addition to your tooling that consumes it. However, building things together that can be built apart is not the node way. There are modules that fix many of the issues you raise. So long as those tools exist in the ecosystem, this is not a problem for core and thus not an issue to be resolved by altering the module format.

In terms of duplicate module loading and version handling this is actually handled by npm, which is a module, and you could write an alternative to it that still supported the module format used by everything in the npm registry.

All of the issues you have are solvable on top of the node ecosystem, I know that isn't how you've chosen to solve them but claiming that node must change its module format in order to come to the same level of support seems absurd.

I don't agree that the issues you are bringing up are as widely held a blocker as you do. They are certainly valid cases and are problems that people should pick a solution for, of which there are many. In having such a vibrant ecosystem, I've come to expect that any problem will be resolved by someone relative to its importance. Sometimes that person is me; most of the time it is not.

File size is certainly an issue, with many ways to solve it, including removing duplicate deps of differing versions. I work on a very large app, and this is a real concern for us, but when I add a few deps I run dedupe and disc and I resolve this kind of thing. It's hardly worth it for me to abandon the dominant pattern and ecosystem to avoid running a few commands and poking at a visualization, nor is it rational for me to suggest that node should alter its module format and npm's versioning strategy so that I might avoid doing so.

@mikeal

mikeal commented Nov 2, 2013

@mreinstein we actually don't "build." Instead we put all the buildy code in the route handler and cache the resource in memory indefinitely.

This means that dev mode is just a flag that turns on source maps and avoids minification. It also sets up a file watcher that flushes the cache on any change. We found that reducing the differences between dev mode and production, as well as removing a step people might forget to run in either dev or deploy, reduced errors and bugs.
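The pattern described above can be sketched roughly like this (the handler shape and buildBundle stand-in are assumptions, not mikeal's actual code): build on first request, serve from memory thereafter, and let a watcher flush the cache in dev mode.

```javascript
// In-memory cache of the built resource; null means "not built yet".
var cache = null;

// Stand-in for the "buildy code" that would bundle/minify sources.
function buildBundle() {
  return '/* bundled source */';
}

// Route handler: build lazily on first hit, then serve from memory.
function bundleHandler(req, res) {
  if (cache === null) {
    cache = buildBundle();
  }
  res.end(cache);
}

// A file watcher in dev mode would call this on any source change.
function flushCache() {
  cache = null;
}

// Simulate one request with a minimal response object.
var out;
bundleHandler({}, { end: function (body) { out = body; } });
```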

@domenic

domenic commented Nov 5, 2013

@mikeal to be fair, there is one problem not solvable within Node's authoring format, which @jrburke has emphasized several times. That problem is wanting to load modules cross-origin without CORS. You need a JSONP-like function wrapper for that, which AMD provides and Node's authoring format does not.

(I am actually unclear whether ES6 modules will support CORS-less cross-domain loading; I recall a few confusing discussions but not a conclusion.)

@williamcotton

@mikeal what if npm allowed for the ability to specify the local variable names for the dependent modules? So, instead of having to explicitly write:

var someModule = require("someModule");

one could do something like this in the package.json file:

localVariables: { someModule: "someModule" }

where it is referencing the thing from the dependencies hash?

That way, some mechanism can choose to write to "CommonJS" style and some other mechanism can write to AMD style.

Current modules would continue to work just fine, and authors who are interested in more easily supporting AMD could rely on a built-in mechanism.
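One way the mechanism could work, as a sketch: a tool reads the hypothetical localVariables map and emits either a CommonJS prologue or an AMD wrapper head from the same declaration (the names and output format here are assumptions, not an existing npm feature):

```javascript
// Emit module-system-specific boilerplate from one declarative map of
// local variable name -> dependency id.
function emitPrologue(localVariables, style) {
  var names = Object.keys(localVariables);
  var ids = names.map(function (n) { return localVariables[n]; });
  if (style === 'commonjs') {
    return names.map(function (n, i) {
      return "var " + n + " = require('" + ids[i] + "');";
    }).join('\n');
  }
  // AMD: dependency array plus matching factory parameters.
  return "define(['" + ids.join("', '") + "'], function (" +
         names.join(', ') + ") {";
}

emitPrologue({ someModule: 'someModule' }, 'commonjs');
// → "var someModule = require('someModule');"
```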

@csnover

csnover commented Nov 6, 2013

The thing that really grinds my gears the more time goes on is that all Node.js really needed to do to be “compatible” with basic AMD modules was to expose a define function that would read the dependencies and consume the factory function. The same node_modules filesystem module lookup system could have been used, the same module cache, the same configuration, and people that wanted to write modules only for Node.js could write them with the less unfortunate syntax they use today. The only practical difference in doing this would be that AMD modules become easier to use within the Node.js ecosystem because they Just Work like any other Node.js module, with the added benefit of also working natively in a browser. The Node people handle dependency conflicts with more specific node_modules directories, the browser people handle dependency conflicts with AMD map, and everyone wins (more or less).

Instead of doing this, libraries like Dojo that try to provide useful, standard, cross-platform code and utilities without requiring any compiler or middleware (since there are, sadly, still many groups that develop primarily or exclusively against e.g. Java backends) are stuck either adding even more dependencies and even more boilerplate to every single module, or having multiple releases (which somehow invariably confuses people), or forcing users to commit to loading their application in Node.js in a totally different way to the way applications normally load in Node.js.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library, it makes me feel sad that the barrier to adoption is so high that I can’t ever hope to compete with the contemporary Node.js “alternatives” in popularity, in large part because Node.js authors decided against allowing the AMD syntax to be supported as a first-class module format.

Oh well. Come 2020, we’ll probably be able to use ES6 modules everywhere. Probably.

@joesepi

joesepi commented Nov 7, 2013

I am presenting on this very topic at CascadiaJS next week so thank you, everyone, for all of this fantastic content. :)

If anyone wants to debate this more in person, I'll be in Vancouver soon and happy to buy a round or two.

Cheers!

@mreinstein

That problem is wanting to load modules cross-origin without CORS.

Can't that same library that is already exposed via the web be made available as an npm/bower/component/whatever package, and pulled into your project instead of loaded via CORS? To me this exemplifies the difference in philosophy between the two projects: RequireJS's goal is to provide a solution for every conceivable way that you might possibly consider declaring dependencies. Browserify doesn't support as many of these cases, but in exchange avoids the technical complexity/debt.

I don't pretend to speak for everyone, but for me (and probably some others) I'm not willing to make that tradeoff. For example, another edge case that was brought up earlier: could I design my apps to use something like require(prefix + "/some_module")? Of course I could. But I suspect that if I saw this kind of thing in someone else's code that I had to maintain, I would not be happy trying to track down how prefix gets assigned. And the fact that it can be assigned in many convoluted ways ensures that some idiot will.
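The maintenance complaint has a concrete static-analysis counterpart: a bundler can only collect require() calls whose argument is a string literal. This toy scanner (a crude stand-in for a real parser like the one browserify uses; the regex approach is illustrative only) shows why a computed require is invisible to it.

```javascript
// Toy static dependency scanner: finds require("literal") calls only.
// A computed call like require(prefix + "/some_module") cannot be
// resolved without running the program, so it is simply not found.
function findStaticDeps(src) {
  var deps = [];
  var re = /require\(\s*(['"])([^'"]+)\1\s*\)/g;
  var m;
  while ((m = re.exec(src)) !== null) deps.push(m[2]);
  return deps;
}

console.log(findStaticDeps(
  'var a = require("a"); var b = require(prefix + "/some_module");'
));
// only ["a"] — the computed require never makes it into the bundle
```

This is the same limitation @jrburke raises at the top of the gist: such dependencies need hints or an out-of-band listing for any bundler, AMD or CommonJS alike.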

@mikeal

mikeal commented Nov 7, 2013

@csnover it is not "the node way" to add things to core which can be accomplished by modules. Tools like browserify support AMD; they live in the ecosystem.

Being one of only a small subset of people that has probably ever experienced a true full-stack, cross-platform JS library

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client. Everything @substack and @maxogden do is the same. The entire voxeljs ecosystem is built on dual purpose libraries including all the ndarray modules.

You are only a minority in that you are doing this in a smaller ecosystem. Most of the node community is writing things that work in both places, most of the modules in npm work with browserify. This is the world we live in, you might consider joining us :)

@dylans

dylans commented Nov 7, 2013

@mikeal, I don't like answers that essentially say "our ecosystem is big, you should suck it up and do things our way, even if it's not ideal". It's a cop-out, and basically says that because it's a problem you don't care about, no one else should care. Claiming that you're right because you're popular isn't useful. What is your actual technical argument against what @csnover has said? Frankly, the JavaScript module loading ecosystem existed long before Node.js and npm. That you choose to ignore Dojo and the painful lessons we learned, and later AMD, is of course your choice, but stop insulting us by saying we're a minority. Browserify does not solve the problems we need to solve every day for our users.

@csnover

csnover commented Nov 7, 2013

Hi @mikeal,

it is not "the node way" to add things to core which can be accomplished by modules

In my response I was talking about "the node way", where Node.js intentionally decides not to make even simple changes to core to improve compatibility (and in fact made such a change once but then reverted it), and how that negatively impacts users. You are welcome to look at it as a more positive thing, but as an AMD user, this is the perspective I have. I like Node.js, but it's very frustrating to me because of this singular problem.

Maybe I also do not understand what “core” Node.js is, but it seems to me to include many things that could be provided by modules—crypto, ssl, http, url, punycode, and on and on. Include the fact that you can add C++ module extensions to V8, and Node.js “core” would pretty much be nothing but V8 itself if it followed “the node way”. So I obviously do not understand this perspective, except insofar that it maybe helps provide a convenient excuse to not be compatible with an inferior (but necessary for async platforms) module format.

You are not part of a small subset. My company is full stack JS, most of our libraries run in node.js and in the client.

Relative to the number of JavaScript developers there are out there, it is practically a rounding error. Node.js developers are themselves a fairly small subset of the JavaScript ecosystem.

This is the world we live in, you might consider joining us :)

A world in which the responsibility to pragmatically improve cross-platform compatibility is abdicated is not really a world I want to live in. Thank you for the offer, though.


ghost commented Nov 9, 2013

@domenic ES6 modules will support cross-domain loading without CORS. Scripts loaded in this manner won't go through all of the Loader callbacks, since that would expose their source. But they'll at least still load.
