WARNING: This document reflects the state of the modules proposal as presented to TC39 in the March meeting. It is expected to change somewhat in the months ahead, most notably to add support for anonymous exports. I will try to keep this up to date as the module proposal continues to solidify. I will also add more use-cases as I continue to collect them.

ES6 Modules

This document describes a number of use-cases addressed by existing module implementations in JavaScript, and how those use-cases will be handled by ES6 modules.

It will also cover some additional use-cases unique to the ES6 module system.

Terminology

For those unfamiliar with the current ES6 module proposal, here is some terminology you should understand:

  • module: a unit of source code with optional imports and exports.
  • export: a module can export a value with a name.
  • imports: a module can import a value exported by another module by its name.
  • module instance object: an instance of the Module constructor that represents a module. Its property names and values come from the module's exports.
  • Loader: an object that defines how modules are fetched, translated, and compiled into a module instance object. Each JavaScript environment (the browser, node.js) defines a default Loader that defines the semantics for that environment.
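A module instance object can be pictured as a frozen bag of the module's exports. The stand-in constructor below is purely illustrative, since the real Module constructor is provided by the host environment:

```javascript
// Illustrative stand-in for the proposal's Module constructor: the
// instance's property names and values come from the exports object,
// and the result is frozen, as a real module instance would be.
function Module(exports) {
  var instance = Object.create(null);
  Object.keys(exports).forEach(function(name) {
    instance[name] = exports[name];
  });
  return Object.freeze(instance);
}

var math = Module({ square: function(n) { return n * n; } });
// math.square(4) === 16, and the instance is frozen
```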

Imports and Exports

Let's start with the basic API of ES6 modules:

// libs/string.js

var underscoreRegex1 = /([a-z\d])([A-Z]+)/g,
    underscoreRegex2 = /\-|\s+/g;

export function underscore(string) {
  return string.replace(underscoreRegex1, '$1_$2')
               .replace(underscoreRegex2, '_')
               .toLowerCase();
}

export function capitalize(string) {
  return string.charAt(0).toUpperCase() + string.substr(1);
}

// app.js

import { capitalize } from "libs/string";

var app = {
  name: capitalize(document.title)
};

export app;

This illustrates the basic syntax of ES6 modules. A module can export named values, and other modules can import those values.

Avoiding Scope Pollution

When working with a module with a large number of exports, you may want to avoid adding each of them as a top-level name in the module that imports it.

For example, consider an API like Node.js fs module. This module has a large number of exports, like rename, chown, chmod, stat and others. With the ES6 module API, it is possible to bring in the module as a single top-level name that contains all of the module's exports.

import "fs" as fs;

fs.rename(oldPath, newPath, function(err) {
  // continue
});

Concatenation

In the example above, the modules were loaded based on their location on the file system. This is how the default Loader for the browser will work.

For production applications, you will want to concatenate the files on the file system into a single file. ES6 modules handle this case by providing a literal way to define a module:

module "libs/string" {
  var underscoreRegex1 = /([a-z\d])([A-Z]+)/g,
      underscoreRegex2 = /\-|\s+/g;

  export function underscore(string) {
    return string.replace(underscoreRegex1, '$1_$2')
                 .replace(underscoreRegex2, '_')
                 .toLowerCase();
  }

  export function capitalize(string) {
    return string.charAt(0).toUpperCase() + string.substr(1);
  }
}

module "app" {
  import { capitalize } from "libs/string";

  var app = {
    name: capitalize(document.title)
  };

  export app;
}

Modules defined using this syntax will be available to other modules, and will not need to be fetched through the Loader.

Modules in Non-Default Locations

In web applications, while many modules may be concatenated into a single file for production use, some modules, like jQuery, may be loaded from a CDN.

It is possible to override the default Loader hooks to specify where to load a module from, but ES6 modules provide a simple API for mapping modules to their physical location.

System.ondemand({
  "https://ajax.googleapis.com/jquery/2.4/jquery.module.js": "jquery",
  "backbone.js": ["backbone/events", "backbone/model"]
});

The first line in the example specifies that the jquery module can be found at https://ajax.googleapis.com/jquery/2.4/jquery.module.js.

The second line specifies that backbone/events and backbone/model can both be found at backbone.js.

You can call System.ondemand as many times as you want, so libraries can provide a snippet of code for people to use in order to import their libraries.
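Internally, the loader needs the inverse of this table: given a demanded module name, find the address to fetch it from. A hedged sketch of that bookkeeping (invertOndemand is a hypothetical helper, not part of the proposal):

```javascript
// Hypothetical sketch: System.ondemand's argument maps an address to
// one or more module names; the loader wants the name -> address
// direction when a module is actually demanded.
function invertOndemand(mapping) {
  var byName = {};
  Object.keys(mapping).forEach(function(address) {
    var names = mapping[address];
    (Array.isArray(names) ? names : [names]).forEach(function(name) {
      byName[name] = address;
    });
  });
  return byName;
}

var table = invertOndemand({
  "https://ajax.googleapis.com/jquery/2.4/jquery.module.js": "jquery",
  "backbone.js": ["backbone/events", "backbone/model"]
});
// table["backbone/model"] === "backbone.js"
```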

The Compilation Pipeline

The next several sections deal with various use-cases involving the compilation pipeline.

Here is a high-level overview of the process.

The dotted line between fetch and translate reflects the fact that the process of retrieving the source is asynchronous.
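Concretely, the hooks this document covers (resolve, fetch, translate, link) run in that order. The toy driver below only illustrates the data flow between the hooks; it is not the real pipeline, which is asynchronous and host-defined:

```javascript
// Toy, synchronous sketch of the hook order; the real fetch is async.
function runPipeline(loader, name) {
  var resolved = loader.resolve(name, {});             // name -> name + metadata
  var source = loader.fetch(resolved.name, {});        // address -> source text
  var translated = loader.translate(source, resolved); // source -> JS source
  return loader.link(translated, resolved);            // JS source -> module
}

// Stub loader whose hooks just record what each stage did.
var stub = {
  resolve: function(name) { return { name: name, metadata: {} }; },
  fetch: function(name) { return "source of " + name; },
  translate: function(source) { return source.toUpperCase(); },
  link: function(source) { return { body: source }; }
};
// runPipeline(stub, "app").body === "SOURCE OF APP"
```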

Stricter Mode (Linting)

Linting tools are a crucial part of a JavaScript developer's workflow, but they are currently used primarily via a compilation toolchain that presents errors in the terminal.

Using the Module Loader's translate hook, it is possible to add additional static checks that are presented to the user as SyntaxErrors.

import { JSHINT } from "jshint";
import { options: jshintOptions } from "app/jshintrc";

System.translate = function(source, options) {
  var errors = JSHINT(source, jshintOptions), messages = [options.actualAddress];

  if (errors) {
    errors.forEach(function(error) {
      var message = '';
      message += error.line + ':' + error.character + ', ';
      message += error.reason;
      messages.push(message);
    });

    throw new SyntaxError(messages.join("\n"));
  }

  return source;
};

If the linter returns errors, the translate hook raises a SyntaxError and the Loader pipeline stops, surfacing the exception as if it were a true SyntaxError.

Importing Compile-to-JavaScript Modules (CoffeeScript)

Increasingly, modules are written using languages that compile to JavaScript.

The translate hook provides a way to translate source code to JavaScript before it is loaded as a module.

System.translate = function(source, options) {
  if (!options.path.match(/\.coffee$/)) { return source; }

  return CoffeeScript.compile(source);
};

In this example, any modules ending in .coffee will be translated from CoffeeScript to JavaScript, and the rest of the pipeline will just see the compiled JavaScript.

Verification and Translation

Some other compilers, like TypeScript and restrict mode, perform both compile-time verification and source translation.

The above techniques could be combined to produce seamless in-browser support for such libraries.
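One way to combine them, sketched with stand-in steps (composeTranslate is a hypothetical helper; the real hooks would call JSHINT and the CoffeeScript compiler as in the examples above):

```javascript
// Hypothetical helper: run several translate-style steps in sequence,
// feeding each step's output into the next. A verifier step can throw
// (like the lint example above), while a compiler step rewrites source.
function composeTranslate(steps) {
  return function(source, options) {
    return steps.reduce(function(src, step) {
      return step(src, options);
    }, source);
  };
}

// Stand-in steps: a "verifier" that rejects a banned token, then a
// "compiler" that is a trivial string rewrite.
var translate = composeTranslate([
  function(src) {
    if (src.indexOf("with(") !== -1) { throw new SyntaxError("banned construct"); }
    return src;
  },
  function(src) { return src.replace(/square/g, "sq"); }
]);
// translate("var square = 1;") === "var sq = 1;"
```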

Using Existing Libraries as Modules

The existing jQuery library is distributed as a library that "exports" the jQuery name onto the global object.

It should be possible to import existing libraries without having to modify the original source, like this:

import { jQuery } from "jquery";

jQuery(function($) {
  $(".ui-button").button();
});

The final hook in the process, link, can be used to manually process a source file into a Module object.

In this case, we could configure the Loader to extract all properties written to window.

function extractExports(loader, source) {
  var wrapped =
    `var exports = {};
    (function(window) {
      ${source};
    })(exports);
    exports;`;

  return loader.eval(wrapped);
}

System.link = function(source, options) {
  if (options.metadata.type === 'legacy') {
    return new Module(extractExports(this, source));
  }

  // returning undefined will result in the normal
  // parsing and registration behavior
}

In order to make it easy for the link hook to decide whether it should use custom linking logic, the resolve hook can provide metadata for the module that will be passed to the following hooks.

In this case, you can keep a list of which modules are "legacy" and populate the metadata with that information in resolve:

var legacy = ["jquery", "backbone", "underscore"];

System.resolve = function(path, options) {
  if (legacy.indexOf(path) > -1) {
    return { name: path, metadata: { type: 'legacy' } };
  } else {
    return { name: path, metadata: { type: 'es6' } };
  }
}

Importing AMD Modules from ES6 Modules

Similarly, you may want to import an AMD module's exports in an ES6 module.

Consider a simple AMD module for the string formatting example above:

// libs/string.js

define(['exports'], function(exports) {
  var underscoreRegex1 = /([a-z\d])([A-Z]+)/g,
      underscoreRegex2 = /\-|\s+/g;

  exports.underscore = function(string) {
    return string.replace(underscoreRegex1, '$1_$2')
                 .replace(underscoreRegex2, '_')
                 .toLowerCase();
  }

  exports.capitalize = function(string) {
    return string.charAt(0).toUpperCase() + string.substr(1);
  }
});

To assimilate this module, you could use a similar technique to the one we used above for jQuery:

var amd = ["string-utils"];

// Resolve 
System.resolve = function(path, options) {
  if (amd.indexOf(path) > -1) {
    return { name: path, metadata: { type: 'amd' } };
  } else {
    return { name: path, metadata: { type: 'es6' } };
  }
};

function extractAMDExports(loader, source) {
  var evalLoader = new Loader();
  var module = evalLoader.eval(`
    var module;
    var define = function(deps, callback) {
      module = { deps: deps, callback: callback };
    };
    ${source};
    module;
  `);

  // Assume synchronously available dependencies. See below
  // for a discussion of async dependencies.
  var exports = {};
  var deps = module.deps.map(function(name) {
    // AMD uses a special dependency named `exports` to
    // collect exports.
    if (name === 'exports') { return exports; }
    else { return loader.get(name); }
  });

  module.callback.apply(null, deps);
  return exports;
}

System.link = function(source, options) {
  if (options.metadata.type === 'amd') {
    return new Module(extractAMDExports(this, source));
  }
}

To be clear, the particular implementation here is simple, and a real approach to AMD assimilation would be more complicated. This should provide some idea of what such an approach would look like.

Importing Node Modules from ES6 Modules

The approach to importing node modules from ES6 modules is similar. Consider a node version of the above module:

var underscoreRegex1 = /([a-z\d])([A-Z]+)/g,
    underscoreRegex2 = /\-|\s+/g;

exports.underscore = function(string) {
  return string.replace(underscoreRegex1, '$1_$2')
               .replace(underscoreRegex2, '_')
               .toLowerCase();
}

exports.capitalize = function(string) {
  return string.charAt(0).toUpperCase() + string.substr(1);
}

You'd override the hooks in a similar way:

var node = ["string-utils"];

// Resolve 
System.resolve = function(path, options) {
  if (node.indexOf(path) > -1) {
    return { name: path, metadata: { type: 'node' } };
  } else {
    return { name: path, metadata: { type: 'es6' } };
  }
};

function extractNodeExports(loader, source) {
  var evalLoader = new Loader();
  return evalLoader.eval(`
    var exports = {};
    ${source};
    exports;
  `);
}

System.link = function(source, options) {
  if (options.metadata.type === 'node') {
    return new Module(extractNodeExports(this, source));
  }
}

Importing From Multiple Non-ES6 Modules

To import from all three of these external module systems together, you would write a resolve hook that would store off the type of module in the context, and then use that information to evaluate the source appropriately in the link hook.

To make this process easier, a JavaScript library like require.js, built for the ES6 loader, could provide conveniences for registering the type of external modules and assimilation code for link.
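A sketch of such a dispatching link hook, with trivial stand-ins for the assimilation helpers defined earlier (only the dispatch on options.metadata.type is the point here):

```javascript
// The extractors below are placeholders for extractAMDExports and
// extractNodeExports from the previous sections.
var extractors = {
  amd: function(source) { return { system: "amd", source: source }; },
  node: function(source) { return { system: "node", source: source }; }
};

function linkHook(source, options) {
  var extract = extractors[options.metadata.type];
  if (extract) { return extract(source); }
  // returning undefined falls through to normal ES6 linking
}
// linkHook("x", { metadata: { type: "amd" } }).system === "amd"
// linkHook("x", { metadata: { type: "es6" } }) === undefined
```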

Import a "Single Export" From a Non-ES6 Module

Some external module systems support modules that have a single export, rather than a number of named exports.

The techniques described above could be used to register that single export under a conventionally known name.

Consider the following "single export" module using node-style modules:

// string-utils/capitalize.js

module.exports = function(string) {
  return string.charAt(0).toUpperCase() + string.substr(1);
}

In order to support using this module in an ES6 module, a loader can create a conventional name for the export that ES6 modules can import.

In this example, we will name the export exports for consistency with existing node practice. Once we have done this, ES6 modules will be able to import the module:

// app.js

import { exports: capitalize } from "string-utils/capitalize";

console.log(capitalize("hello")) // "Hello"

Here, we are renaming the conventionally named exports to capitalize.

In order to achieve this, we will augment the earlier node assimilation code to handle module.exports = semantics.

function extractNodeExports(loader, source) {
  var evalLoader = new Loader();
  var exports = evalLoader.eval(`
    var module = {};
    var exports = {};
    ${source};
    ({ single: module.exports, named: exports });
  `);

  if (exports.single !== undefined) {
    return { exports: exports.single };
  } else {
    return exports.named;
  }
}

System.link = function(source, options) {
  if (options.metadata.type === 'node') {
    return new Module(extractNodeExports(this, source));
  }
}

A similar approach could be used to allow assimilated AMD modules to have a "single export".

Importing an ES6 Module From a Node Module

When using a node module, we would want to be able to import any other module, regardless of the source.

One major benefit of the above approaches to importing non-ES6 modules is that it means that the standard System.get will be able to load them.

This means that it's easy to support require in a node module: just alias it to System.get.

function extractNodeExports(loader, source) {
  var evalLoader = new Loader();
  var exports = evalLoader.eval(`
    var module = {};
    var exports = {};
    var require = System.get;
    ${source};
    ({ single: module.exports, named: exports });
  `);

  if (exports.single !== undefined) {
    return { exports: exports.single };
  } else {
    return exports.named;
  }
}

Importing an AMD Module With Asynchronous Dependencies

In the above examples, we assumed that all dependencies in external modules are available synchronously, so we could use System.get in the link hook.

AMD modules can have asynchronous dependencies that can be determined without having to execute the module.

For this use-case, you can return (from link) a list of dependencies and a callback to call once the Loader has loaded the dependencies. The callback will receive the list of dependencies as parameters and must return a Module instance.

var amd = ['string-utils'];

System.resolve = function(path, options) {
  if (amd.indexOf(path) !== -1) {
    return { name: path, metadata: { type: 'amd' } };
  } else {
    return { name: path, metadata: { type: 'es6' } };
  }
};

System.link = function(source, options) {
  if (options.metadata.type !== 'amd') { return; }

  var loader = new Loader();
  var [ imports, factory ] = loader.eval(`
    var imports, factory;
    function define(dependencies, callback) {
      imports = dependencies;
      factory = callback;
    }
    ${source};
    [ imports, factory ];
  `);

  var exportsPosition = imports.indexOf('exports');
  imports.splice(exportsPosition, 1);

  function execute(...args) {
    var exports = {};
    args.splice(exportsPosition, 0, exports);
    factory(...args);
    return new Module(exports);
  }

  return { imports: imports, execute: execute };
};

Returning the imports and a callback from link allows the link hook to participate in the same two-phase loading process of ES6 modules, but using the AMD definition to separate the phases instead of ES6 syntax.

Importing a Node Module By Processing requires

Because node modules use a dynamic expression for imports, there is no perfectly reliable way to ensure that all dependencies are loaded before evaluating the module.

The approach used by Browserify is to statically analyze the file first for require statements and use them as the dependencies. The AMD CommonJS wrapper uses a similar approach.

The link hook could be used to analyze Node-style packages for require lines, and return them as imports.

By the time the execute callback was called, all modules would be synchronously available, and aliasing require to System.get would continue to work.

import { processImports } from "browserify";

System.link = function(source, options) {
  var loader = this;
  var imports = processImports(source);

  function execute() {
    return new Module(extractNodeExports(loader, source));
  }

  return { imports: imports, execute: execute };
};

Of course, this only works as long as no requires are used with dynamic expressions, in a conditional, or in a try/catch, but those are already limitations of systems like Browserify.
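A rough idea of what such a require scan looks like (this regex-based processImports is a simplification; real tools like Browserify parse the source, and therefore handle comments and strings correctly):

```javascript
// Naive static scan for require("...") calls; as noted above, dynamic
// expressions are invisible to this kind of analysis.
function processImports(source) {
  var pattern = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  var imports = [], match;
  while ((match = pattern.exec(source)) !== null) {
    imports.push(match[1]);
  }
  return imports;
}

var deps = processImports(
  'var a = require("string-utils");\nvar b = require("fs");'
);
// deps is ["string-utils", "fs"]
```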

Interoperability in General

Let's review the overall strategy used for assimilating non-ES6 module definitions:

  • Non-ES6 modules can be loaded through the Loader by overriding the resolve and link hooks.
  • Non-ES6 modules can asynchronously load other modules by returning imports from link, and synchronously through System.get.

This means that all module systems can freely interoperate, using the Loader as an intermediary.

For example, if an AMD module (say, 'app') depended on a Node-style module (say, 'string-utils'):

  1. When loading app, the link hook would return { imports: ['string-utils'], execute: execute }.
  2. This would cause the Loader to attempt to load 'string-utils' before calling the provided execute callback.
  3. The Loader would fetch string-utils and evaluate it using the Node-style link hook.
  4. Once this is done, the provided execute callback would run, receiving the string-utils Module as a parameter.
  5. The execute callback would then return a Module.

This is just an illustrative example; any combination of module systems could freely interoperate through the Loader.
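The walkthrough above can be simulated in miniature. Everything here (toyLoad, the registries) is invented for illustration; it only mirrors the imports/execute two-phase shape that link returns:

```javascript
// linkResults maps module names to what each system's link hook
// returned; registry caches instantiated modules.
function toyLoad(linkResults, registry, name) {
  if (registry[name]) { return registry[name]; }
  var linked = linkResults[name];
  // Phase 1: load the declared dependencies first.
  var deps = linked.imports.map(function(dep) {
    return toyLoad(linkResults, registry, dep);
  });
  // Phase 2: execute with the loaded dependencies.
  return (registry[name] = linked.execute.apply(null, deps));
}

var linkResults = {
  "string-utils": { imports: [], execute: function() {
    return { capitalize: function(s) { return s[0].toUpperCase() + s.slice(1); } };
  } },
  "app": { imports: ["string-utils"], execute: function(utils) {
    return { greeting: utils.capitalize("hello") };
  } }
};
// toyLoad(linkResults, {}, "app").greeting === "Hello"
```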

A Note on "Single Export" Interoperability

Many of the existing module systems support mechanisms for exporting a single value instead of a number of named values from a module.

At the current time, ES6 modules do not provide explicit support for this feature, but it can be emulated using the Loader. One specific strategy would be to export the single value as a well-known name (for example, exports).

Let's take a look at how a Loader could support a Node-style module using require to import the "single export" of another Node-style module.

This same approach would support interoperability between module systems that support importing and exporting of single values.

We'll need to enhance the previous solution we provided for this scenario:

var isSingle = new Symbol();

function extractNodeExports(loader, source) {
  var evalLoader = new Loader();
  var exports = evalLoader.eval(`
    var module = {};
    var exports = {};
    var require = System.get;
    ${source};
    ({ single: module.exports, named: exports });
  `);

  if (exports.single !== undefined) {
    return { exports: exports.single, [isSingle]: true };
  } else {
    return exports.named;
  }
}

System.link = function(source, options) {
  if (options.metadata.type === 'node') {
    return new Module(extractNodeExports(this, source));
  }
}

Here, we create a new unique Symbol that we will use to tag a module as containing a single export. This will avoid conflicts with Node-style modules that export the name exports explicitly.

Next, we will need to enhance the code that we have been using for Node-style require. Until now, we have simply aliased it to System.get. Now, we will check for the isSingle symbol and give it special treatment in that case.

// this assumes that the `isSingle` Symbol is in scope
var require = function(name) {
  var module = System.get(name);
  if (module[isSingle]) {
    return module.exports;
  } else {
    return module;
  }
}

This same approach, using a shared isSingle symbol, could be used to support interoperability between AMD and Node single exports.

As described earlier, ES6 modules would use import { exports: underscore } from 'string-utils/underscore'.

Configuration of Existing Loaders

The requirejs loader has a number of useful configuration options that its users can use to control the loader.

This section covers a sampling of those options and how they map onto the semantics of the ES6 Loader. In general, the compilation pipeline provides hooks that can be used to implement these configuration options.

Base URL

The requirejs loader allows the user to configure a base URL for resolving relative paths.

In the default browser loader, the base URL will default to the page's base URL. The default System.resolve will prefix that base URL and append .js to the end of the module name (if not already present).

The browser's default Loader (window.System) will also include a baseURL configuration option that controls the base URL for its implementation of resolve.

JavaScript code could also configure the Loader's resolve hook to provide any policy they like:

var resolve = System.resolve;

System.resolve = function(name, ...args) {
  if (name.match(/fun/)) {
    return `/assets/javascripts/${name}.js`;
  }
  return resolve(name, ...args);
};

URL Arguments

Similarly, the requirejs loader allows the specification of additional URL arguments. This could also be handled by overriding the resolve hook.

var resolve = System.resolve;

System.resolve = function(name, ...args) {
  return resolve(name, ...args) + "?bust=" + (new Date().getTime());
};

Timeouts

The requirejs loader allows the specification of a timeout before rejecting the request.

With the ES6 Loader, the fetch hook can be overridden to reject the fetch after some time has passed.

var fetch = System.fetch;

System.fetch = function(url, options) {
  setTimeout(function() {
    options.reject("Timeout");
  }, 5000);

  fetch(url, options);
};

Support for Legacy Modules

The requirejs loader provides a mechanism for declaring how a legacy module should be interpreted:

requirejs.config({
  shim: {
    backbone: {
      deps: ['underscore', 'jquery'],
      exports: 'Backbone'
    },
  }
});

The example above under Using Existing Libraries as Modules shows one approach to this problem. That approach should work generically, without having to list a specific export name.

The link hook provides a way to define dependencies for legacy modules.

var config = {
  backbone: {
    deps: ['underscore', 'jquery'],
    exports: ['Backbone']
  }
}

function executeCallback(source, exportNames) {
  System.eval(source);
  var exports = {};
  exportNames.forEach(function(name) {
    exports[name] = System.global[name]
  });
  return new Module(exports);
}

System.link = function(source, options) {
  var moduleConfig = config[options.normalized];
  if (!moduleConfig) { return; }

  return {
    imports: moduleConfig.deps,
    execute: function() {
      return executeCallback(source, moduleConfig.exports);
    }
  };
};

Referencing Modules in HTML

In Ember.js, Angular.js, and other contemporary frameworks, JavaScript objects are referenced in HTML templates:

<!-- ember.js -->
{{#view App.FancyButton}}
<p>Fancy Button Contents</p>
{{/view}}

Here, the app is asking Ember.js to render some HTML defined in an App.FancyButton constructor. Note that Ember encourages the use of a global namespace for coordination between JavaScript and HTML templates.

<!-- angular -->
<button fancy-button>
  <p>Fancy Button Contents</p>
</button>

Here, the app is asking Angular.js to replace the <button> with some content defined in a globally registered fancy-button directive.

Angular and Ember both use globally registered names to define controller objects to attach to parts of the HTML controlled by the framework.

<!-- ember -->
{{control "fancy-button"}}

Here, the app is asking Ember.js to render some HTML defined in an App.FancyButtonView and use an instance of the App.FancyButtonController as its controller. Again, Ember is relying on a globally rooted namespace for coordination.

<!-- angular -->
<div ng-controller="TodoCtrl">
  <span>{{remaining()}} of {{todos.length}} remaining</span>
</div>

Here, the app is asking Angular to use a globally rooted object called TodoCtrl as the controller for this part of the HTML. In Angular, this controller is used to control the scope for data-bound content nested inside of its element.

To handle the kind of situation where a module is referenced by a String and needs to be looked up dynamically, ES6 modules provide an API for looking up a module at runtime.

System.get('controllers/fancy-button');

Systems like Ember or Angular could use this API to allow their users to reference a module's exports in HTML.

In the first Ember example, instead of referencing a globally rooted constructor, the HTML would reference a module name:

<!-- ember.js -->
{{#view views/fancy-button}}
<p>Fancy Button Contents</p>
{{/view}}

And the module would look like:

// views/fancy-button.js
import { View } from "ember";

export let view = View.extend({
  // contents
});

The second Angular example could be rewritten as:

<!-- angular -->
<div ng-controller="controllers/todo">
  <span>{{remaining()}} of {{todos.length}} remaining</span>
</div>

And the JavaScript:

// controllers/todo.js

export function Controller($scope) {
  // contents
}

The general pattern is to switch from globally rooted namespaces to named, registered modules. System.get provides a way to dynamically look up already loaded modules.
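A sketch of the framework-side half of that pattern (lookupExport and registry are invented stand-ins; a real framework would consult the loader via System.get):

```javascript
// registry stands in for the loader's table of already-loaded modules.
function lookupExport(registry, moduleName, exportName) {
  var module = registry[moduleName];
  if (!module) {
    throw new Error("module not loaded: " + moduleName);
  }
  return module[exportName];
}

var registry = {
  "controllers/todo": { Controller: function($scope) { this.scope = $scope; } }
};

// The framework resolves the string from the template into a constructor.
var Controller = lookupExport(registry, "controllers/todo", "Controller");
```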

Creating Modules from HTML

The new Web Components specification provides a way to create a JavaScript constructor through HTML:

<element extends="button" name="x-fancybutton" constructor="FancyButton">
  <script>
    FancyButton.prototype.razzle = function () {
    };
    FancyButton.prototype.dazzle = function () {
    };
  </script>
</element>

// app.js

var b = new FancyButton();
b.textContent = "Show time";
document.body.appendChild(b);
b.addEventListener("click", function (event) {
    event.target.dazzle();
});
b.razzle();

Here, the <element> tag is creating a globally rooted name for the constructor.

The specifics will probably vary in practice, but something like this could work:

<element extends="button" name="x-fancybutton" module="web/x-fancybutton">
  <script>
  // automatically imports Element from web/x-fancybutton
  Element.prototype.razzle = function () {
  };
  Element.prototype.dazzle = function () {
  };
  </script>
</element>

// app.js

import { Element: FancyButton } from "web/x-fancybutton"

var b = new FancyButton();
b.textContent = "Show time";
document.body.appendChild(b);
b.addEventListener("click", function (event) {
    event.target.dazzle();
});
b.razzle();

This is a great writeup, thanks. I'm a bit nervous about how the single-export works. It appears from your example that the variable name of a single export by default inherits the name the module defines? Or is it defined based on the way it is imported? It's not clear to me what defines this function as capitalize.

import { exports: capitalize } from "string-utils/capitalize";

console.log(capitalize("hello")) // "Hello"

@sintaxi:

It is expected to change somewhat in the months ahead, most notably to add support for anonymous exports.

This proposal is mostly about the semantics and the loader pipeline; syntax modifications, especially to support anonymous exports, are upcoming (and looking good!).

A special exports export is a really good idea so long as there is accompanying syntax support for something shorter and less bulky than:

import { exports: capitalize } from "string-utils/capitalize";

which from talking with @domenic is outside the scope of this particular document. This way we can export single-exports from values, which is the only thing worth using so I can finally ignore everything else in the spec and only use that form.

One thing we've been experimenting with in browserify lately (mostly all done by @thlorenz) is inline base64-encoded source maps so each transform can optionally append an inline base64-encoded sourceContentsURL on each file with offsets that are file-local and then at the end the browser-pack step moves all the file-local sourceContentsURLs into a single sourceContentsURL for the whole bundle. This approach lets transforms be written as standalone mappings that only need to concern themselves with local transformations and local map offsets without extra plumbing to communicate with the final source map compiler step. This is something that browser vendors should make sure to include when implementing ES6 modules.

@domenic thanks. This looks like something I can work with, but just as @substack mentions, I will basically write my modules as single exports and ignore the rest. What concerns me, and is often ignored, is the cognitive overhead of requiring all the dependencies in different ways depending on the opinions of the library author. All examples show only one library being imported. What happens when I have a dozen dependencies and all my imports look different? Seems like a lot of mental energy wasted. Take a look at what it looks like if you were to do the various imports based on this document. This is a path to insanity if you ask me.

import { capitalize } from "libs/string";
import "fs" as fs;
import { JSHINT } from "jshint";
import { options } from "app/jshintrc"
import { jQuery } from "jquery";
import { exports: foo } from "som-utils/foo";
import { processImports } from "browserify";
import { exports: underscore } from 'string-utils/underscore'
import { View } from "ember";
import { Element: FancyButton } from "web/x-fancybutton"

I'm not sure if I missed this, but is it possible to split the declaration of a module into multiple files? Let's say jQuery 3.0 comes out as an ES6 module. Do developers write each component in a separate file, with a module 'jquery' header? Or are all files concatenated into one with the module header on top?

Nice work!

One additional feature you could mention from requirejs is loader plugins, for loading non-js dependencies like templates, json, text, css, etc. The translate hook appears to be flexible enough to handle even these use cases, which is really awesome.

I'm also looking forward to simpler imports of the single export, that's pretty much all I use in my node modules as well.

Here's another use case you may not have considered: loading many files where the order is not important and a glob/pattern match would be very convenient. Some examples:

import { jQuery } from "jquery";
import * from "jquery/plugins/**/*.js"

or

import { angular } from "angular";
import * from "controllers/**/*.js";
import * from "directives/**/*.js";

In both these examples the many files loaded don't care about loading order, so long as jQuery/Angular were loaded first. They also don't actually export anything, but rather augment an existing object: jQuery plugins add to the jQuery object, and Angular controllers/directives add to your Angular module:

angular.module('myApp').controller('ControllerOne', function(){...})

I made an experimental requirejs loader plugin - requirejs-glob - that enables this with requirejs, but you have to run a server-side piece during development for the pattern matching so it's not ideal. Will ES6 modules either support patterns or will the hooks be flexible enough to be able to add it?

Can anyone please explain to me why the additional syntax like import { ... } and export function is a good idea?

One thing I worry about with the System.translate CoffeeScript example is if you have a compile-to-JS language which compiles asynchronously.

I'd like to be able to specify polyfills whose file names would give clues as to required features to detect for--conditionally avoiding a load if the object referenced within the file name is already present in the user's browser (whether this is for use with a Module polyfill like https://github.com/ModuleLoader/es6-module-loader that might work in older browsers now or also for future polyfilling against future specs).

For example,

// There is no second argument in the callback, because a genuine
// polyfill is by design meant to alter existing global objects, so
// the resulting code reads naturally to anyone familiar with the
// standards.
System.import(['./config', 'polyfill!Array.prototype.map'], function (config) {});

It looks like I can do this by using "normalize" and "fetch" (at least it's working for me with the ES6 Module Loader), but I just wanted to express the use case. I think it is a common and important situation: one wishes to avoid stuffing a bunch of polyfill code or polyfill-detection code into one's modules, and to avoid writing shims that force readers of the code to learn a new API (and make users' browsers always load a shim file even when the feature is already supported natively).

I am concerned, however, about whether modules can themselves do conditional loading by leveraging the configuration of "fetch", etc.; otherwise it would sadly seem impossible to use conditional polyfills within modules.
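To sketch the idea: a custom fetch-style hook could skip the network entirely when the dotted path in the module name already resolves on the global object. `hasGlobalPath` and `fetchPolyfill` are hypothetical helper names, and the real hook signatures may well differ:

```javascript
// Walk a dotted path like "Array.prototype.map" from the global object.
function hasGlobalPath(path) {
  return path.split(".").reduce(function (obj, key) {
    return obj == null ? undefined : obj[key];
  }, globalThis) !== undefined;
}

// Hypothetical hook: only fetch the shim when the feature is missing.
function fetchPolyfill(name, realFetch) {
  var feature = name.replace(/^polyfill!/, "");
  if (hasGlobalPath(feature)) {
    // Feature is present natively: hand back an empty module body.
    return Promise.resolve("");
  }
  // Feature is missing: fall through to the real fetch for the shim.
  return realFetch(feature);
}
```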

(And it'd be super cool if modules formally supported plugins in general, and such a polyfill plugin concept in particular (at least in a simple manner), given that such polyfills will likely continue to be needed as long as JavaScript and its platform keep developing. Perhaps certain metadata keys or values could also be reserved.)

While off topic, but related to giving JavaScript a greater locus of control and reducing boilerplate with built-in functionality the way modules do: how about a new content type (and a matching default file extension) for JavaScript that avoids loading HTML entirely and instead assumes a pre-built DOM tree like the following, letting the script handle setting document.title, document.body, stylesheet loading (or CSSOM setup), etc.?

<!DOCTYPE html>
<html>
<head><meta charset="utf-8" /></head>
<body><!--<script src="pretend_I_am_executing_here.js"></script>--></body>
</html>

Yes, it will not work without scripting (and a supporting browser), but many sites are already depending on such... As far as the "declarative" argument in favor of HTML--one can of course just as easily write declarative JavaScript (or JSON), so that is not a compelling argument against. Assistive technologies are no doubt progressing to better handle JavaScript, so that does not seem to me to be a perpetual barrier to such a universal convenience either.

Besides, when one wishes to use raw JavaScript for templating (as through JsonML or Jamilih), there is even less reason to have to deal with HTML and its closing tags, etc. (and with a CSSOM library, one can overcome the lack of variables and such in CSS), allowing one to work purely in (more succinct) JavaScript.

One other thought: how about a configure() method that sets up initial configuration options and passes them down the line (including to normalize())? This would let simple middleware-type libraries define themselves purely with System methods, without requiring their users to set global configuration objects (or to load and instantiate library code beyond adding a script tag or an import), while still allowing reuse by different library implementations. And it wouldn't need to be called on a per-import basis.
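A minimal sketch of what I mean; `configure` and `loaderConfig` are made-up names, and a real design would probably hang the options bag off the Loader instance rather than a module-level variable:

```javascript
// Shared options bag that later hooks (normalize, fetch, ...) would
// read from. Hypothetical; not part of the actual proposal.
var loaderConfig = {};

function configure(options) {
  // Shallow-merge the new options into the shared bag so configure()
  // can be called multiple times by different libraries.
  Object.keys(options).forEach(function (key) {
    loaderConfig[key] = options[key];
  });
  return loaderConfig;
}

// Usage: two independent libraries contribute configuration once,
// without either touching a global config object of its own.
configure({ baseUrl: "/js" });
configure({ paths: { jquery: "libs/jquery" } });
```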

Is there any way via the pipeline to get access to the Module instance after it is fully realized (regardless of how it was linked), but before it is imported and used by other modules? I need to "tag" each module with its own module ID, and I have some other use cases that would require modifying modules after load.

Hm. I respect the fact that the modules spec can accommodate all these cases, but it seems like we're going through an awful lot of work. Someone's going to have to write a flexloader package which does all the magic here to support imports of all the various kinds, and then it's going to have to be included on every page.

And I'm not really convinced that the asynchronous and circular-dependency cases are really taken care of. As @brettz9 commented, having asynchronous translators is not uncommon at all, especially in a browser context where you become asynchronous as soon as you need to fetch a file. The LESS compiler is asynchronous, for example, because it needs to process @imports.

And node's module system, although not the best at resolving circular dependencies, at least has clear-cut semantics for them. From the description given here, I'm not sure what happens: it seems to depend crucially on how the flexloader has mutated your legacy package and whether you are using a single-value export or something else. It would be nice to get more clarity on that.

Also, food for thought: it seems you can write an AMD-style module like this (using Bluebird-style `Promise.map` and `spread`):

Promise.map(['foo', 'bar'], System.import).spread(function(foo, bar) {
   // use foo and bar here
});

alternatively:

function define(deps, f) {
  return Promise.all(deps.map(function(d) {
    return System.import(d);
  })).then(function(values) {
    // Promise.all resolves with an array of module instances,
    // which we spread into the factory's parameters.
    return f.apply(null, values);
  });
}

Given an appropriate polyfill for System.import (based on requirejs?), this is a pattern that could work in ES5 (via es6-shim) and ES6. Isn't that how I want to write my modules for the transition period?
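Here is that define() shim exercised end-to-end with System.import stubbed out; the stub and its registry are made up for the demonstration, whereas the real System.import would return a promise for the loaded module instance:

```javascript
// Stubbed import: resolves module names from a canned registry, the
// way the real System.import would resolve them through the loader.
var System = {
  import: function (name) {
    var registry = { foo: { val: 1 }, bar: { val: 2 } };
    return Promise.resolve(registry[name]);
  }
};

function define(deps, f) {
  return Promise.all(deps.map(function (d) {
    return System.import(d);
  })).then(function (values) {
    // Spread the array of resolved modules into the factory.
    return f.apply(null, values);
  });
}

// Usage: the factory runs only once both dependencies have resolved.
define(['foo', 'bar'], function (foo, bar) {
  return foo.val + bar.val;
}).then(function (sum) {
  console.log(sum); // 3
});
```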

In fact, it seems to me that this would be a much better approach for the "Importing an AMD Module With Asynchronous Dependencies" section:

function mungeAMD(loader, source) {
  return loader.eval(`
    var $result;
    function define(deps, f) {
      $result = Promise.all(deps.map(function(d) {
        return System.import(d);
      })).then(function(values) {
        return f.apply(null, values);
      });
    }
    ${source};
    $result;
  `);
}

System.link = function(source, options) {
  if (options.metadata.type === 'amd') {
    return mungeAMD(this, source).then(function(o) { return new Module(o); });
  }
}

...assuming System.link is allowed to return a Promise of a Module.

Similarly, node-style imports could be:

var fs = yield System.import('fs');
// etc

assuming an implicit module wrapper similar to:

Q.async(function*() {
  /* module body here */
});
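To make that wrapper concrete without pulling in Q, here is a minimal generator runner standing in for Q.async, with System.import stubbed; both stand-ins are hypothetical:

```javascript
// Minimal Q.async stand-in: drive a generator, resuming it with each
// yielded promise's resolution, and settle a promise with its return.
function runGenerator(genFn) {
  return new Promise(function (resolve, reject) {
    var gen = genFn();
    function step(value) {
      var next = gen.next(value);
      if (next.done) {
        resolve(next.value);
        return;
      }
      Promise.resolve(next.value).then(step, reject);
    }
    step(undefined);
  });
}

// Stubbed System.import for the demonstration; the real one would
// return a promise for the module instance.
var System = {
  import: function (name) { return Promise.resolve({ name: name }); }
};

// The implicit module wrapper in action:
runGenerator(function* () {
  var fs = yield System.import('fs');
  return fs.name;
}).then(function (result) {
  console.log(result); // "fs"
});
```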

The more I think about it, the more I like making System.import a solid building block. See https://github.com/jrburke/requirejs/issues/1028 for some more discussion.
