Can you help me understand the benefit of require.js?

I'm having trouble understanding the benefit of require.js. Can you help me out? I imagine other developers have a similar interest.

From Require.js - Why AMD:

The AMD format comes from wanting a module format that was better than today's "write a bunch of script tags with implicit dependencies that you have to manually order"

I don't quite understand why this methodology is so bad. The difficult part is that you have to manually order dependencies. But the benefit is that you don't have an additional layer of abstraction.


Here's my current JS development work flow.

Development

When in development-mode, all scripts have their own tag in the DOM.

<script src="depA1/dep1-for-module-A.js"></script>
<script src="dep2-for-module-A.js"></script>
<script src="moduleA/moduleA.js"></script>
<script src="dep1-for-module-B.js"></script>
<script src="module-B.js"></script>
<script src="moduleC/module-C.js"></script>
<script src="script.js"></script>

There is no abstraction layer. This allows me to better debug individual files. The browser reads separate files, so I can debug with Developer Tools. I like how it's straightforward.

Dependencies are basically managed right here. depA1 needs to be listed before moduleA. It's explicit.

Modules

Modules are 'transported' by attaching to the global namespace.

( function( global ) {
  var dep1 = global.depA1;
  var dep2 = global.depA2;
  function ModuleA() {
    // ...
  }
  // export
  global.ModuleA = ModuleA;
})( this );

Production

All scripts are concatenated and minified. One HTTP request on load.

<script src="site-scripts.js"></script>

The concat + minify task is maintained separately. It's part of a build process, a Makefile or what-have-you. For dependency management, the ordering of scripts matches how they were listed in the HTML.

Changing modes

This can be done easily with some sort of configuration and templating. For example, by setting a prod_env config variable to true or false, the site is either in production mode, serving the one file, or development mode, serving every single file.

{% if prod_env %}
  <script src="site-scripts.js"></script>
{% else %}
  <script src="dep1/dep1-for-module-A.js"></script>
  <script src="dep2-for-module-A.js"></script>
  <script src="moduleA/moduleA.js"></script>
  ...
{% endif %}

Discussion

  • What benefit does require.js provide over this workflow?
  • How does require.js address minimizing HTTP requests? Is this any better than concatenating/minifying all the scripts?

Pretend you're not a web developer that's used to writing a dozen script tags into a webpage for a moment.

If you didn't need to know that module awesome-thing required one or more other modules, and just specifying awesome-thing actually worked, would you be happy just specifying awesome-thing or would you want to be aware of and specify all of its dependencies, their dependencies, etc?

Adding to what @cowboy said, the lots-of-script-tags, manage-my-own-dependencies approach might work for a small project, but it rapidly falls apart for anything sizeable. It's entirely plausible that a rich client-side JS app will have thousands or even tens of thousands of lines of code (or more!), and managing a codebase of that size requires small modules, one to a file.

With RequireJS, I can have as many small files as I want, and don't have to worry about keeping track of dependencies or load order -- things Just Work. The RequireJS optimization tool also allows me to make builds that go beyond the standard concat-and-minify technique: I can create built files that only load certain parts of an app at startup, and then separate built files for other parts of the app that can be loaded on demand. With the optimizer, you not only end up with a small number of HTTP requests, but you also get the ability to manage the size and shape of the JS payload.
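
For illustration, a split build along those lines might look roughly like this in an r.js build profile (the module names and directories here are made up):

// build.js -- hypothetical r.js build profile, run with: node r.js -o build.js
({
  baseUrl: "js",
  dir: "dist",
  // Each entry becomes its own built file; "secondary" excludes everything
  // already bundled into "main", so it can be require()'d on demand later.
  modules: [
    { name: "main" },
    { name: "secondary", exclude: ["main"] }
  ]
})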

Finally, RequireJS makes it possible to write an entire, modular app without touching window. I don't have to worry about naming conflicts, and I can choose the best name for a thing at the time I need that thing, rather than being stuck with a single, global name. Smart people will argue as to the importance of this, but I find it pleasant, and more so when I'm working on a team.
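
For contrast with the global-namespace transport shown earlier in the gist, the same ModuleA might look roughly like this as an AMD module (just a sketch reusing the example's names):

// moduleA.js -- no window/global assignment anywhere
define(['depA1', 'depA2'], function(dep1, dep2) {
  function ModuleA() {
    // ...
  }
  return ModuleA; // the module's export is simply the return value
});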

Bottom line: having used RequireJS (and the Dojo module system before that), I have a hard time imagining developing anything even slightly non-trivial without it. It just makes all sorts of things way too easy.

To add to what @rmurphey said, "modules" in the app sense may or may not include something more than just JavaScript dependencies. AMD plugins allow me to package other types of resources with my module. For example, my typical AMD controller module may look like this:

define([
  "can",
  "jade!views/tooltip",
  "css!stylesheets/tooltip"
], function(can, view) {
  return can.Control("Tooltip", {
    render: function() {
      this.element.append(view(this.options));
    }
  });
});

In some other file…

require(["controllers/tooltip"], function( Tooltip ) {
  var tooltip = new Tooltip("#id");
});

I can be confident that when I require(['controllers/tooltip'], fn…), its separate view file and associated stylesheet have also been loaded. This is a great help, not only in keeping your components/controllers modular, but it lends a hand when unit testing as well.

For example, let's say my unit tests depend on the tooltip component being hidden via display: none; when initially rendered, and this style is defined in stylesheets/tooltip.css. If I simply dropped in the script tag and the css was missing, my tests would break. With AMD, I can simply require the module and know that all of its deps (including the css) come along with it.

What a coincidence--I'm just this moment putting together slides about AMD and Require.js!

Dependencies are basically managed right here. depA1 needs to be listed before moduleA. It's explicit.

It's only explicit insofar as the example file names describe their dependencies. Without that, the dependency graph is flattened to a list. Complex dependencies cannot be expressed sequentially.

For dependency management, the ordering of scripts matches how they were listed in the HTML.

Require.js helps you to avoid duplicating this information across Makefiles and HTML templates.

As for minimizing requests, see @rmurphey's comments on the Require.js optimizer.

Just to add to what @rmurphey and @cowboy have said, the real value of AMD is not so much the "asynchronous" part as it is the "module definition" part. When dealing with large-scale projects we have to think about the experience of other developers who will be working on that code, now or sometime in the future. There are a few main wins:

It takes so much less time for another developer to debug or change a module if they know exactly where that module's file exists in the project. Carefully ordered script tags will tell you WHAT modules are being loaded, but they won't tell you explicitly where each module is being used. With AMD, the path to the module (or a string that points to the path to the module) is strictly defined at the top of the file. Developers can quickly trace functionality without having to step through a debugger.

Secondly, because the coupling points between modules are so strongly defined by the require calls, it is incredibly easy to swap out or modify a dependency. The example I like to use is updating to a new version of vendor code such as jQuery. With AMD, you can simply change which module is being required; the variable name used in the closure remains the same. Changing the version globally means you have to perform regression tests across the entire site. Changing it one module (or a few) at a time means the QA process can be much more incremental. Not that I would recommend running two versions of jQuery in production, but you can see the point: you have much more control over when and how changes occur in your codebase.
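
A minimal sketch of what that swap could look like using the paths config (the version number and file locations are assumptions):

requirejs.config({
  paths: {
    // Point the 'jquery' module ID at a specific vendored build.
    // Swapping this one line is enough to trial a new version.
    'jquery': 'vendor/jquery-1.9.1'
  }
});

// moduleA.js -- unchanged; it still just asks for 'jquery'
define(['jquery'], function($) {
  return function highlight(el) {
    return $(el).addClass('active');
  };
});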

Lastly, AMD generally allows us to write more cohesive code. As @rmurphey stated above, you can write smaller, more concise modules and not have to worry about keeping track of them all. This kind of abstraction really helps me identify when I'm writing the same code over and over again as well, so it's made my modules much more utilitarian and I've reduced the number of modules being loaded overall.

So, in short, AMD really provides a framework to writing code that is going to be more accessible to the next developer who comes along and touches it. You'll especially appreciate it when that developer is you six months from now and you haven't touched the code in that long.

What about single-page web apps? When you have a lot of components and you have to load them dynamically, it helps a lot. You just need the path of a module to load it and you don't have to deal with its dependencies (and it's a waste of bandwidth to include all of a module's dependencies in its file when you may have already loaded them).

It's also good when you just need certain components of a medium/big framework. You can download the whole framework and load only the components you need, without having to find out what the dependencies of each component are. In a project I developed, for example, I needed to use the touch events module of the jQuery Mobile framework (just that part). I just had to download the whole framework (all the components are AMD modules) and add 'jquery-mobile/events/touch' as a dependency of one of my modules. The events module has jquery, jquery.mobile.vmouse and jquery.mobile.support.touch as dependencies, and I didn't have to know that to use the module I needed. It helped me a lot in this case.
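
Roughly, the consuming module can look like this (the handler and helper name are just illustrative):

// my-widget.js -- depends only on the touch events sub-module;
// its own deps (jquery, vmouse, support.touch) are pulled in transparently.
define(['jquery', 'jquery-mobile/events/touch'], function($) {
  return function bindSwipe(el, onSwipe) {
    $(el).on('swipeleft swiperight', onSwipe);
  };
});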

As someone relatively new to using RequireJS, who was also skeptical about the benefits it would bring, the biggest win has been being able to see which external modules are being used just by looking at the top of the file. It's allowed us to break our new projects down into many small pieces and still figure out very quickly what the dependencies of a specific piece are.

Just a quick example from an action menu component:

  FinancialTransaction = require 'cs!../models/financial_transaction'
  FinancialTransactionsHelpers = require 'cs!../financial_transaction_helpers'
  FinancialTransactionsActionsTemplate = require 'text!../templates/financial_transaction_actions.html'
  MoveTransactionModal = require 'cs!./move_transaction_modal'
  UndoDelete = require 'cs!./undo_delete'

Many of these arguments aren't really for RequireJS or AMD. They are for modular components and better dependency management. I would argue that you can do this much more easily with more traditional, synchronous CommonJS tooling that doesn't have AMD's asynchronous callback hell or overhead:

https://github.com/component/component
https://github.com/substack/node-browserify

@jonathanong agree, really this is about having a tool for modularization -- for me, RequireJS fits that bill, but I'd rather see a developer pick some module/dependency management system -- AMD, CommonJS, heck, even the Rails asset pipeline -- than see them piling script tags into index.html. It seems that's the question being asked here -- "what's the matter with a bunch of script tags, anyway?" -- and the comments seem to agree that the answer is "plenty" :)

@jonathanong Agreed. CommonJS is great for projects where I can control more of the process, and generally seems cleaner than the callbacks in AMD handling. That being said, using require.js (or building your own simple AMD handler) is the approach I take when working on a project with a well-established code base where I need a good modular structure without the overhead of build steps, don't have the ability to run Node, or want to be able to run a modular site without a server at all.

The asynchronous capabilities of AMD can be used for other things if written correctly. Core functions within an AMD handler can be used for state management, as well as pre-caching of other resources (images, css, JSON - although I personally only use it for JSON). These can be done other ways, but it's nice to have a single global "async" set of utility functions for a site.
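
For the JSON pre-caching case, a rough sketch using the text! plugin (the data path is made up):

// The text! plugin fetches the file as a string during dependency
// resolution, so the JSON is already in the browser when this runs.
require(['text!data/bootstrap.json'], function(raw) {
  var config = JSON.parse(raw);
  // ... initialize app state with config ...
});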

In addition to the dependency-management benefits already explained, when you say "Modules are 'transported' by attaching to the global namespace", I read "Modules are polluting the global namespace". Global pollution is bad, dangerous, and easy to avoid with AMD.

Beautiful! I was wondering about this exact thing when I built my first simple contacts module and app wrapper. I'll have to dig into this more.

@ralphholzmann I would've thought you'd be using StealJS for your dependency management. I've been using it for the past two years, but I'm slowly starting to look at RequireJS for its AMD support and for the smaller footprint it has on a project.

In the RequireJS build process, is it possible to concat/minify every CSS file like StealJS does with its production.css file? Same for templates?

I also took a look at Almond for projects not needing on-the-fly requires.

Your feedback/comments would be appreciated ;)

Great discussion, by the way; I'll keep checking it!

As far as RequireJS vs other module solutions go, I have yet to see a library offer such a seamless, flexible way to work with JS modules in front-end dev. It does not add a compile step to each save, or a watch script to wait for. The only thing it asks is that you add a define() wrapper to your modules, and optimize before deployment.

Save & reload.

@falzhobel It's recommended that you have a single main.css file that uses @import to reference your css files, see http://requirejs.org/docs/optimization.html#onecss

I personally recommend sass/compass, and running 'compass watch'. You can set it up so you have a main.sass file that imports your other sass files. This way, every time you save, your css will be compiled into a single file.

Mind....slowly....forming....new.....wrinkle

Thank you for all the wonderful responses. Wow, this is a helpful community.

Where this is all going, for me, is thinking about package management. I'm now understanding why Component is the way it is, integrating this sort of module system.

If I may, I'd like to veer the discussion down a related path...


What's blowing my mind:

Nested dependencies

I publish a lil' library like Vanilla Masonry. Right now it's doing the global namespace pollution thing, but for the sake of the example, let's say it implements define() properly:

define(function() {
  function Masonry() {...}
  return Masonry;
})

Now other developers can implement the Masonry script via require.js. YAY.

Okay... so what happens if I want to define a dependency for Masonry, like Underscore?

define( [ 'underscore' ], function( _ ) {
  function Masonry() {/*...*/}
  _.extend( Masonry.prototype, /*...*/ );
  return Masonry;
})

In order to define the dependency, I need to know the file structure. But that's up to the implementing developer, not the library author.

How does RequireJS handle this?
Can the Masonry library be implemented without the implementor having to change the source?
Or have we crossed over into the Treacherous Valley of Package Management?

I would urge @desandro to peruse the top half of this post, which is a pretty good rundown of some things coming down the pipe in ES6; particularly the additional material linked in the section titled 'Where do these modules fit in with AMD?' If that wasn't enough of a spoiler: consensus is gathering around making the notion of modularity native, because many people do indeed find it useful and feel it helps them better organize their code.

To be fair, I think the process and rationale outlined in the gist are perfectly reasonable; there's minimal ceremony, and it assumes a developer is smart enough to determine and express a proper order for a simple application. I don't think the point at which things get unmanageable approaches as rapidly as other commenters have suggested, even though that point surely does exist for exceptionally large apps. Regardless, if it's possible to load dependencies in a proper order through AMD, it's also possible through the maintenance of a flat build order.

But I also find myself nearly always authoring UIs atop Rails these days anyway, so I get the asset pipeline for free, which coincidentally does exactly what @desandro outlines between environments. I'm also writing in CoffeeScript and test-driving as much as possible, which, if you're into that, makes the concern of waiting on a watcher a bit moot, because you're probably going to resort to one anyway to run your tests, or to compile your assets if you're outside Rails.

I say this knowing my situation and subsequent approach isn't universal, and I think that's just fine, and important to this discussion, because the vibe I got from this gist is that the author was feeling as though he was missing out on something obviously important that maybe he was doing wrong. AMD and libraries that operate on the pattern are simply tools that should be picked up and used if their rationale makes sense and solves a problem for you and your team. If that's not the case...

Since a lot of people have already explained why modules are a good thing, I will reply to @desandro's last comment.

You don't need to know the file structure to consume the dependencies; I rarely put my dependencies in the top-level folder (baseUrl in RequireJS terms). The good thing about using plain strings to reference dependencies is that they can point to anything you want. There are options like map and paths that can be used to change the module ID resolution logic; you can use them, for instance, to load Lo-Dash instead of underscore.js or to make some paths shorter:

requirejs.config({
  map : {
    '*' : {
      'underscore' : 'vendor/lodash' // all modules will load lodash instead of underscore
    }
  },
  paths : {
    'mout' : 'very/long/path/to/mout' // avoid typing long path all the time
  }
});

// magic.js
define(['underscore', 'mout/string/slugify'], function(_, slugify){
  // will load "vendor/lodash.js" and "very/long/path/to/mout/string/slugify.js"
});

Nowadays there is also the shim config to handle legacy code, but I usually wrap everything in define() to make it easier. The umdjs repository contains some examples of how to export libs to multiple environments (AMD, node.js, CJS, browser globals); in most cases you can solve it with just a few extra lines of code at the bottom of the lib:

if (typeof define === 'function' && define.amd) {
  define(Masonry); // AMD
} else if (typeof module !== 'undefined' && module.exports) {
  module.exports = Masonry; // node.js
} else {  
  window.Masonry = Masonry; // browser global
}
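
For the shim config mentioned above, a minimal sketch might be (the Backbone/Underscore entries are just the usual example, not anything from this thread):

requirejs.config({
  shim: {
    'backbone': {
      deps: ['underscore', 'jquery'], // load these first
      exports: 'Backbone'             // pick up the global the script creates
    },
    'underscore': { exports: '_' }
  }
});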

AMD is good because it's flexible, but it's also more complex than node.js modules since it gives you more options. I switched all my projects to AMD ~2yrs ago and I'm not looking back... no.more.awkward.namespacing, no globals, no concat issues, lots of flexibility... Some of my projects have more than 500 source files; I can't see myself doing the same thing manually. Very happy RequireJS user here.

The ability to load templates and compile them automatically during build (without changing your code) is also great - plugins are an overlooked feature and one of the biggest strengths.

PS: There are tools to convert between the different module formats (see: r.js and nodefy) the authoring format is just a detail... pick whatever works for you. (I'm biased towards AMD :P)

@desandro Yup, that's why I'm all for Component. It's a synchronous require.js + Bower package downloader: much lower overhead, less nesting, and you don't have to worry about all the things @millermedeiros just mentioned. The only difference is you would just do var _ = require('underscore'); var slugify = require('./mout/string/slugify'); instead (assuming that's what he's doing; I'm not sure what he's doing).

The only downside I can think of is the lack of a built-in, automatic build process, but that's mostly because node.js' current watch command sucks.

If you remember my post about vanilla-masonry and Component, the standalone version will have AMD support automatically.

Hey @desandro, to answer your question, I'd venture to say that all modules should require the most vague, abstract name: for example, define(["underscore"]) over define(["vendor/documentcloud/underscore"]). So long as everyone remains consistent (which they have been, for package-management reasons), you can easily map to the correct location in your own configuration.

One problem I have with RequireJS right now is the load path for nested dependencies that want to load modules relative to their location rather than the baseUrl. Maybe an embedded map configuration could work here; I am unsure.

Another added benefit is that you can remap dependencies; this makes it so that you only use jQuery or Lo-Dash even if other packages include Underscore or Zepto.

Example of that: https://github.com/tbranyen/backbone-boilerplate/blob/master/app/config.js#L16

Defining the dependencies of your files is more than a requirement for large-scale projects. There simply is no other way you can achieve maintainability and collaboration between teams.

However, I beg to differ from calling everything a module, and I certainly dislike AMD and its derivatives.

There are 3 reasons for that.

AMD Is Not Focused On Solving Dependencies

It is also a way of defining how your code will be structured by forcing you to wrap it in a huge anonymous function, called the factory:

define(id?, dependencies?, factory);

Some people like this style, some people don't. The main reasoning is that you don't want to leak variables into the global namespace. I understand that, but the AMD pattern is not the only way to avoid global namespace pollution.

AMD Creates Closures For Each File

Since the definition of AMD requires that you wrap your code in an anonymous function, this results in one extra closure for every file in your codebase.

We all are knowledgeable folks here and know what this means or doesn't mean.

AMD creates verbosity that's not required in production

The declaration / requirement statements are there in production code.

Whatever each file requires, stated at the top of your module, will be there in the production code. If AMD were purely a dependency-management solution this should not happen: it should plainly make sure that the files are concatenated in the right order based on their dependencies and get out of the way after that.

You are sending ~1-3% more bytes over the wire, relative to your JS app's size. That is too much.

What else?

Google's Closure dependency system is the most elegant solution by far. It does what it says it does: it takes care of your dependencies, nothing more, and it does it very, very well.

It can be ripped out of the Closure Tools and used independently on any kind of project. It's something I plan on doing when I find a free weekend.

It doesn't dictate how you write your code. You want a namespaced hierarchy? Cool! You want AMD modules? Certainly! You want something in between? Can do... no restrictions.

And the Closure Compiler in SIMPLE_OPTIMIZATIONS will take care of removing all the dependency statements from your production code, leaving it bare as it should be.
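
For those unfamiliar with it, a Closure-style dependency declaration looks roughly like this (the namespaces are illustrative):

// masonry.js
goog.provide('app.Masonry');
goog.require('app.util');   // dependency declared; no factory wrapper, no extra closure

app.Masonry = function() { /* ... */ };

// A generated deps file (closure-library ships a depswriter script for this)
// tells the debug loader which file provides which namespace, so load order
// is resolved without hand-ordered script tags.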

(As I finished writing this rant I figured, why not post it on my blog too.)

Hey all!

FWIW, I think the format discussions are largely moot. AMD vs CJSM vs Harmony vs CoffeeScript vs TypeScript vs whatever. I know @jrburke has had some success supporting Harmony modules, and curl.js already supports native CommonJS Modules in addition to AMD (and will add other transpile-to-js languages in the future).

My first goal to tackle for 2013: allow devs to write their code and integrate 3rd-party libs written in any transpile-to-js language and -- with minimal configuration -- it'll all just work.

For the short term, I still highly recommend AMD since it's got the best browser support; the widest availability of plugins, loaders, and build tools; and the most coding flexibility (by far).

@jonathanong and @bendman: The loader handles the asynchrony for you. If you're dealing with callback hell in your AMD code, you're likely not understanding how AMD's define works.

Great discussion!

-- John

@dubbs I get your point, but what about when using external plugins' CSS files? I don't want to convert them all to .scss and then have to import them manually in my main.scss.

What I currently do with StealJS in production is that I have a compiled production.css (all my js-related styles) which I then concat within my styles.css (actual site CSS)

I'm looking for a similar approach but with RequireJS / r.js

This isn't about CSS, folks. Asset loading is a side-effect of dynamic module loading. Fun to work with, but a complete distraction when talking about the merits of AMD.

@thanpolas I was glad to read a fresh perspective on Require.js in this thread, but I'm wondering if you can elaborate on a few things.

We all are knowledgeable folks here and know what this means or doesn't mean.

I know what this means generally, but I'd like to hear how closure encapsulation is relevant in this context.

You are sending over the wire ~1-3% more bytes, from what your JS app size is. That is too much.

Putting aside the issue of whether 1-3% is "too much", can you qualify these statistics?

@rpflorence I know this isn't about CSS, but I think it's relevant in the end to the build process of AMD. I thought it'd be a good idea to bring it to the table even though it's not the main subject ;)

@jugglinmike,

I know what this means generally, but I'd like to hear how closure encapsulation is relevant in this context.

Well, I'll try to reiterate the literature on closures in a very short spin... Creating closures requires responsibility on the developer's part to properly handle all the variable declarations that happen inside and outside of a closure, the main reason being to give the JavaScript engine a chance to garbage collect.

Massive use of closures will consequently result in memory leaks; if not from you, then statistically it's a certainty it'll happen in projects out there, even in ones you might use as third-party dependencies.

Putting aside the issue of whether 1-3% is "too much", can you qualify these statistics?

I can't be exact here, I am mostly guessing, but here's a rough estimate...

The requirejs runtime that's included in the production-shipped JS asset should be around ~5kb of minified code, which gzipped can get compressed down to ~2kb.

But the real burden comes from the requirement-declaration overhead in each and every file. The smallest declaration one can have is ~57 bytes:

define('module', function(require, exports, module) { });

And looking at real-life projects, the definitions can get to really crazy sizes, north of 1kb. An average file with 5 requirements would be ~130 bytes:

define('module', ['moduleA', 'moduleB', 'moduleC', 'moduleD', 'moduleE'],
  function(moduleA, moduleB, moduleC, moduleD, moduleE) {});

We are talking about large-scale projects here, so a modest estimate of files included is 1,000. That results in 130kb of declaration overhead in your codebase. Minify and gzip that and you can't get lower than 20kb.

Would you agree with my assumptions?

@thanpolas

I don't think 1-3% overhead is a worthy argument against Require.js or any module system. If it increases developer productivity by more than 5% then I would argue it's worth it, though the real question is 5% more efficient than what. 99% of the time, there are better things to optimize.

I don't think the closure argument is valid either, since modules that work globally (like jQuery) use closures anyway.
And memory leaks due to closures shouldn't be the responsibility of the module system; they should be the responsibility of the developers.

That being said, I think dependencies should be handled by the build process, not during run time, which would reduce this overhead.

@jonathanong my post right before yours gives a more in-depth answer to the exact issues you raise.

@thanpolas I'm not saying you're wrong. In fact, I agree. I just don't think these are significant arguments against using Require.js.

The AMD pattern is what the Promise system should have been if it had been designed for a greater purpose:

Synchronizing the arrival of multiple asynchronous resources

As a module loader it's just another module loader; it doesn't stand out, except for an extra-elegant dependency-declaration mechanism.

What AMD loaders are good at is waking you up when all of the pieces (text templates, business logic, data) arrive in the browser and are ready for use.

AMD's genius is in making the timing of things easy. Without it, you would still be slapping together "thread ready" semaphores and adding Promises by hand.
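
Roughly (the module names are hypothetical):

// None of this runs until the template text, the data module, and the
// rendering logic have all arrived -- no hand-rolled "ready" semaphores.
require(['text!templates/invoice.html', 'models/invoice', 'lib/render'],
  function(template, invoice, render) {
    render(template, invoice);
  });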

Managing your own dependencies is not bad (as the original author pointed out) as long as

  • you know what you're doing
  • aren't afraid to use tools like YSlow
  • the project is small / less complex

When you have a large project with multiple javascript includes and the page itself is composed in several 'layers' (with a master page and content pages), a more robust solution is called for.

AMD is just such a solution.

It provides for:

  • a flexible script inclusion pattern
  • lazy loading (or 'just in time' loading depending on how you're thinking about it)
  • dependency checking that is much less brittle than 'manual ordering'
  • 'component' or 'modular' centric design.

When working with a medium to large team of developers, this list of benefits will give you pause.

For the record, you CAN debug RequireJS modules (also Dojo's). I did that in my last project.

When the project was finished, I realized that even n00b programmers had created nicely grained modules and reused them when possible.

Some have implied in this thread that require.js can also load CSS files, yet the require.js FAQ says no. If it cannot, this is a pretty huge elephant in the room. I want something that will include ALL my dependencies, not just JS files.

@salient1 for CSS includes you can use a plugin like Require-CSS. There is also LESS support and it works with builds.

Hi all, I finished a proof of concept for a new approach to dependency management. It's a working solution that allows discreet, unobtrusive and powerful dependency management. I'll quote from the README of Mantri.

Mantri is...

  • A Robust and discreet Dependency Management System.
  • Synchronous. Everything is done before the DOMContentLoaded event fires.
  • A Grunt plugin.
  • A command line tool (soon).
  • Cross-browser.

Mantri does not...

  • Dictate how you write your code. Vanilla JS, AMD, CommonJS, knock yourselves out.
  • Need any runtime on your production file.
  • Need any dependency declarations in your production file.
  • Have any problem moving around JS files or folders.
  • Have any problem working with other dependency systems.
  • Pollute your namespace. But you are free to if you want.

... I'll be going through documentation and a blog post announcing the tool in the near future and would welcome any early feedback you may have.

Good reading... thanks to all.

In conclusion?

Great reading!
I am interested in the "Changing modes" section of the original post. If we want to live-debug in a production environment when something is not working right, how can this be achieved using RequireJS? Any ideas would be great.

Is this not a good candidate for a Stack Overflow question?

Yet another issue with require.js and the like: the DOMContentLoaded event fires as soon as all the originally embedded scripts have loaded and the original DOM of the document is present. Since injected files or scripts loaded via XHR are not included in this, they will not be present at that time. This breaks the load process and defers the time the page is ready, effectively leading to longer page load and parsing times. What's more, the scripts will compete for network slots with other embedded assets (images, fonts, etc.), slowing them down yet again.

So JS is NOT where you should manage your dependencies; use build tools or server-side includes to manage them.
Regarding page load times, attempting to handle dependencies via JS always comes at the expense of your users and the general performance of your site, especially if the actual page is to be rendered via JS, resulting effectively in a second load cycle for the assets inlined by the generated code. Using some partials loaded via XHR would result in a third cycle, and so on.

(Remember the late 1990s, how fast the web used to be then?)

@masswerk - I think you should try working more with require.js; your statements are a bit flawed.
With require, your main/app module waits for dependencies to load before doing any work, so the DOMContentLoaded event isn't an issue (usually I put my DOM-ready event registration inside my main module).

Usually this isn't an issue in production, as require.js is a build tool and will concat your JS into one file (you'll usually end up with multiple XHR-loaded scripts only in development).

The asynchronous loading aspect is beneficial when you have a really large site and want to load features/packages only when needed. E.g. I have a labs site, which is a SPA and has a WebGL demo link. I don't include Three.js in my minified JS, as I only want to load it when someone clicks on the WebGL demo link.
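
That on-demand loading looks roughly like this (the selector and module IDs are made up):

require(['jquery'], function($) {
  $('#webgl-demo-link').on('click', function() {
    // Three.js stays out of the main built file; it is fetched the first
    // time someone actually opens the WebGL demo.
    require(['three', 'demos/webgl'], function(THREE, demo) {
      demo.start();
    });
  });
});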

Take a look at the optimization documentation:
http://requirejs.org/docs/optimization.html

@jasonmcaffee - Hm, I've seen some require.js in production still using JS-managed dependencies. On the other hand, using a completely different load-cycle model in testing than in production isn't ideal either. (Who bothers about this anymore at all? E.g. developers have become so used to jQuery and $(document).ready that they don't even know they could have loaded their data already in the head section and had everything calculated and ready for some time before the event fires. And using tools like require.js keeps you even further from considering these things, as they trigger late during the development phase.)
The fact is, we're all using computers that are beasts only the NSA would have had access to just some 20 years ago, running the best-optimized software that has ever existed (JS engines). And yet there are just too many projects where you can watch pages render like it were the mid-1990s.


Here's a model to speed up web apps:
1) Include a source tag for the data set server-side
2) Use something like Plexer.js ( http://www.masswerk.at/plexer ) to spawn your model logic into a concurrent process
3) Have your model ready at DOMContentLoaded
4) Have more tiny scripts (they will be cached and available instantaneously) rather than a single compiled one for every page type (so the browser only has to load the additional bits)

A little off topic, but an interesting project I've been using is jam.js. I've been loving it for managing JS libraries and packaging them for use. It uses the Require framework but adds some functionality to it. Figured some people on here may find it interesting.

Most medium-sized web apps (which is what most of us are building, right?!) do not need the syntactic vomit that RequireJS sprays all over the project. It is not hard to get even several dozen
