tjanczuk/blog.md — Last active August 29, 2015
Node.js for .NET developers - Part 1: Getting Started

If you are a .NET developer who wants to start using or just explore Node.js, this post provides initial guidance. I will make a practical comparison of .NET and Node.js, focusing on similarities and differences to give you a frame of reference for further learning. Part 2 of this blog post will introduce Node.js frameworks, tools, and coding practices.

Overview

Node.js is a server-side programming framework based on JavaScript. It is open source and cross-platform (it runs on Windows, MacOS, Linux, and Solaris). The project was started by Ryan Dahl in 2009 and has quickly grown in popularity. Just like the .NET Framework, Ruby on Rails, or Python, it provides a rich enough ecosystem of functionality to support creating modern web applications. Unlike the .NET Framework, Node.js focuses primarily on IO-bound workloads (e.g. web servers, orchestration engines) and is rarely used for CPU-intensive applications (e.g. image processing, complex computation).

Given that web programming is the primary scenario focus of Node.js, the Hello, world samples usually take the form of a simple HTTP server:

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, world!');
}).listen(8080);

Subsequent sections will focus on key aspects of Node.js to compare and contrast them with .NET framework.

High level architecture

One of the key differences between .NET/CLR and Node.js is in the underlying architecture. The CLR is a multi-threaded runtime, while Node.js executes application code on a single thread within a process. Work scheduling in the CLR focuses on assigning CPU time to threads, taking into account thread-blocking operations, and is for the most part abstracted away from the developer. In contrast, Node.js code execution is driven by a central, single-threaded event loop which executes non-blocking event handlers on that thread. Event handlers implemented by the application must be non-blocking: if they perform a blocking operation or a CPU-intensive computation, subsequent events will starve. For example, if you made a blocking API call within the request handler of the Hello, world HTTP server above, your web server would be unresponsive to HTTP requests that arrived while the computation was taking place. Developers must be much more aware of work scheduling in a Node.js application than in a .NET application.
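This starvation is easy to demonstrate with a few lines of plain Node.js (a minimal sketch; the 200ms busy-wait stands in for any blocking computation):

```javascript
// Schedule a 10ms timer, then block the single thread with a busy-wait;
// the timer cannot fire until the blocking loop finishes, because both
// compete for the same event loop thread.
var scheduled = Date.now();

setTimeout(function () {
    console.log('timer fired after ' + (Date.now() - scheduled) + 'ms (asked for 10ms)');
}, 10);

var end = Date.now() + 200;
while (Date.now() < end) { } // stand-in for a CPU-bound computation
var blockedFor = Date.now() - scheduled;
```

The timer, although due after 10ms, fires only once the blocking loop yields, roughly 200ms later.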

Impact on APIs

The most visible manifestation of this design difference is in the shape of the APIs .NET and Node.js offer. While .NET usually provides both blocking and asynchronous (non-blocking) API versions for long running IO operations, Node.js offers almost exclusively asynchronous APIs. For example, reading a file in .NET may be implemented in a blocking fashion with:

var contents = File.ReadAllText("myfile.txt");

or in a non-blocking fashion using the TPL and async/await (or more traditional async APIs in .NET):

var contents = await File.OpenText("myfile.txt").ReadToEndAsync();

In contrast, Node.js only offers an asynchronous API for reading files:

var fs = require('fs');

fs.readFile("myfile.txt", function (error, content) {
    // ... do something with content
});

The example above demonstrates a pattern asynchronous APIs in Node.js often follow: the fs.readFile API starts an asynchronous IO operation of reading a file, and invokes the callback function specified as the second parameter only when that operation completes. At that point the content of the file is already in memory and can be provided to the application without blocking.

(Note that there actually is a blocking version of the file reading API in Node.js, fs.readFileSync; however, its use in practice is limited to narrow scenarios that do not require the process to remain responsive to other events.)

Impact on scalability

The second important implication of the key architectural difference between .NET/CLR and Node.js is in how applications scale out.

When you write a .NET web application, you typically don't have to do anything special about scalability until your application cannot be accommodated by a single server VM. CLR's multi-threaded scheduler knows how to fully leverage multi-core CPUs of modern servers, allowing code in your application to execute in parallel on multiple cores of a single machine.

In contrast, Node.js runs application code on a single thread per process, and therefore cannot fully leverage multi-core CPUs using just one process. A Node.js developer faces the scalability problem much earlier than a .NET developer: as soon as the capacity of a single CPU core is insufficient to handle the application logic. Fortunately, Node.js makes it easy to scale out your application to multi-core CPUs using the built-in cluster module, optionally with the added management features of strong-cluster-control. Using cluster, a few extra lines of code let you spawn multiple child processes, each running the same application code. If your application has a network listener (e.g. an HTTP server), all spawned processes will listen on the same network port, and the operating system kernel will load balance new connections across them. That way, a Node.js application can fully leverage the CPU power of a multi-core, multi-CPU server. The strong-cluster-control module by StrongLoop enhances cluster with the ability to monitor child processes, throttle their restarts in case of frequent failures, and control the cluster through an API or command line tool.

The fact that a Node.js developer must design for scalability much earlier than a .NET developer has another advantage. Designing for scalability early usually results in cleaner application architecture. Many of the mechanisms a developer would use to scale out to multi-core servers would be used verbatim to scale out to a multi-server web farm. For example, scaling out to multi-core servers often requires a mechanism to externalize application state, something a similar .NET application can simply accomplish with shared in-memory state (with complexity that usually manifests itself only later in the form of race conditions and deadlocks).

Installation

.NET Framework comes pre-installed on modern Windows operating systems. As long as the version of the framework matches your needs, your .NET application can immediately run on those systems.

In contrast, Node.js must be explicitly installed. Fortunately, Node.js has a much smaller and portable runtime than .NET Framework. The core Node.js runtime on Windows is a self-contained executable less than 7MB in size (as of Node.js version v0.10.28). Given that, it is not unreasonable to re-distribute the Node.js runtime along with your application, especially if you have a dependency on a particular version.

Modules

A .NET developer has access to functionality built into the .NET Framework itself as well as third-party .NET components distributed as NuGet packages. In Node.js, units of functionality are packaged as modules, which are imported into the application using the require function.

A rich set of Node.js modules is built into the Node.js runtime itself and forms part of the single executable file representing the runtime (node.exe on Windows). The built-in modules range in functionality from file system operations, process management, cryptography, and clustering to creating TCP, HTTP, and UDP clients and servers. In essence, one can already get decent mileage out of the built-in modules alone. You can get an idea of their functional span by looking at the Node.js API documentation.

One of the key strengths of the Node.js ecosystem, however, is the thriving open source community providing Node.js extensions. The central repository of these extensions is npm. As of this writing, there are over 70k modules on npm (compare this to over 20k NuGet packages for the .NET Framework). The span of functionality offered by npm modules can only be characterized as everything under the Sun: you will find modules that allow communication with most SQL and NoSQL storage solutions, messaging systems, MVC and REST frameworks, templating engines, protocol implementations, etc. Although this is not a requirement, the de-facto pattern the Node.js community follows is that a module published on npm also hosts its source code on GitHub.

npm modules are versioned using semver semantics. Following these semantics is important to allow npm to choose the correct version of a module during automatic module installation (see Configuration).

Using npm modules in your Node.js application requires them to be installed, similarly to installing NuGet packages for a .NET application. Every Node.js distribution or build contains the npm package manager: a command line client application (written in Node.js itself) that allows you to search for, install, manage, and publish npm modules.

Installing a Node.js module is simple. Let's use the ws module to implement a simple WebSocket server in Node.js. You will also notice that the source code of the module, along with its documentation, is on GitHub at einaros/ws, which is the pattern followed by the vast majority of Node.js modules. First, install the module:

npm install ws

Notice that a node_modules\ws directory was created on disk. This is where the module code is stored and where the Node.js runtime will load it from when the application runs.

At this point you can import the ws module using the require function within your Node.js application and implement your WebSocket server using APIs it offers:

var ws = require('ws');

var wss = new ws.Server({ port: 8080 });
wss.on('connection', function (connection) {
    connection.on('message', function (message) {
        connection.send(message.toUpperCase());
    });
    connection.send('Hello!');
});

Notice how the result of the require function is assigned to the ws variable. This mechanism allows you to create your own namespace for the APIs exposed by the ws module by making them available as functions and properties of an object whose name your application controls. This mechanism is similar to using alias directives in .NET, except it is much more commonly used in Node.js applications and modules.

Configuration

All but the simplest Node.js applications and modules contain a configuration file called package.json. The primary purpose of this file is to specify the name of the package, its version, and to declare dependencies: other Node.js modules that a particular module requires to function correctly. These aspects of the package.json are similar to the manifest embedded in .NET assemblies. Other roles this file plays include specifying custom post-installation steps, ways of running tests, or development time dependencies. The full specification of the package.json file explains the capabilities in detail.

{
  "name": "myapp",
  "version": "0.1.0",
  "dependencies": {
    "express": "0.4.2",
    "mongodb": "> 1.2"
  }
}

The minimalist package.json file above specifies myapp as the application name, sets the version number to 0.1.0, and declares that the application depends on the express and mongodb npm modules. Module dependencies can require a very specific version of a module (e.g. 0.4.2) or provide a more relaxed version constraint (e.g. > 1.2, ~1.2.0, or 1.2.x). In general, applications should be as specific as possible in declaring module versions. When comparing module versions, the semver semantics apply.

The package.json file is read and interpreted not by the application itself, but by the npm client tool that manages npm modules. In particular, calling npm install in a directory that contains a package.json file will ensure that all modules the component depends on are available in the environment; missing modules will be obtained from the npm repository. Running npm install is part of the typical workflow of deploying a Node.js application.

Unlike the .NET configuration file, package.json usually does not specify any application settings. Node.js application settings are commonly passed to the application through environment variables. Some applications introduce their own, file-based mechanisms of specifying settings. Typically these files use JSON or YAML formats, which are easy to parse in JavaScript and easier to work with for a human than XML.
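A common pattern is to read settings from process.env with sensible fallbacks (a sketch; PORT and LOG_LEVEL are hypothetical settings chosen for illustration):

```javascript
// Application settings come from environment variables, with defaults
// applied when a variable is unset (PORT and LOG_LEVEL are hypothetical):
var port = parseInt(process.env.PORT, 10) || 3000;
var logLevel = process.env.LOG_LEVEL || 'info';

console.log('port=' + port + ' logLevel=' + logLevel);
```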

In Part 2 of this blog I will explore frameworks, tools, hosting, and coding practices of Node.js.

Node.js for .NET developers - Part 2: frameworks, tools, and beyond

Part 1 of this blog post provided an introduction to Node.js for .NET developers. In this part I am going to discuss common Node.js frameworks, tools, hosting technologies, and coding practices.

Frameworks

The primary scenario for Node.js is web development. The Node.js runtime itself contains a full HTTP stack implementation. There is a large number of npm modules that provide higher level web application capabilities on top of that HTTP stack: MVC frameworks, REST frameworks, authentication plugins, WebSocket implementations, etc. A good way to discover them and get the idea of the most popular ones is to search the npm registry.

One of the most popular MVC frameworks for Node.js is express. Express supports a flexible request routing mechanism, contains an extensible processing pipeline with a variety of available middleware, and supports several view rendering engines, among them Jade and EJS. A substantial ecosystem of middleware modules compatible with express exists on npm, from authentication (e.g. passport) to WebSockets (e.g. socket.io), and more. If you have been using the ASP.NET templating syntax, you will likely find EJS very easy to transition to. This is what a simple express application looks like:

var express = require('express');
var app = express();

// use EJS to render views from the ./views directory
app.set('view engine', 'ejs');

app.get('/:name', function (req, res) {
  res.render('hello', { name: req.params.name, date: new Date() });
});

app.listen(3000);

The hello.ejs view rendered from the single route controller above would look like this:

<html>
<head>
    <title>EJS sample</title>
</head>
<body>
    <h1>Hello, <%= name %></h1>
    <p>The date is <%= date %></p>
</body>
</html>

If you are developing an HTTP API application and need a framework that is more data-centric than MVC-centric, Node also provides you with a wide selection of modules. One of the most popular ones is restify. Similarly to express it supports an extensible middleware pipeline, but out of the box provides many of the features specifically targeting HTTP APIs: CORS, GZip, JSONP, as well as parsing of the body and key HTTP headers. While it does not support WebSockets itself, it composes well with other modules that do (like socket.io).

How do you choose between express and restify when coming from .NET? A rule of thumb: express is to restify what ASP.NET MVC is to ASP.NET Web API.

No discussion of Node.js frameworks would be complete without a mention of WebSockets and the real-time web, as this was originally one of the focus scenarios for Node.js. While many options exist to add real-time communication capabilities to your Node.js web apps, the undisputed leader is socket.io. Socket.io provides an abstraction of real-time duplex communication between a web client and a web server. It also provides several mechanisms that implement that abstraction: WebSockets, HTTP long polling, Flash sockets, iframe, and JSONP polling. Client and server can negotiate which mechanism to use and support graceful degradation.

The functional equivalent of socket.io in .NET is ASP.NET SignalR. In fact, the SignalR project in .NET was largely inspired by socket.io and created as a .NET answer to it.

Distributions

As of this writing, the npm repository contains over 70k modules, with several modules addressing any given scenario. As the ecosystem evolves in a completely decentralized way, there is a lot of overlap between modules. Some modules are actively supported while others become deprecated and decay over time. This situation created a very low barrier to entry for developers willing to contribute to the ecosystem and was one of the primary factors behind fast and innovative growth of the platform.

However, from the perspective of Node.js users who want to build applications on top of the platform, there is certainly a need for driving a level of consistency and prescriptive guidance within the Node.js ecosystem. The current situation in many ways resembles the early days of Linux. Just as Red Hat, Ubuntu, Fedora, or SUSE provided consistency, predictability, and support for Linux, a similar need exists for distributions of Node.js.

One attempt at providing a level of prescriptive guidance and consistency, coupled with support, are the LoopBack and StrongOps products from StrongLoop. Given the continued rapid growth of the Node.js ecosystem, similar "distributions" of Node.js are sure to be created going forward.

LoopBack provides a framework and tools for creating 3-tier web API applications. It builds on top of the express MVC framework, provides SDKs for major mobile platforms and HTML5, as well as bindings and ORM models for several SQL and NoSQL backend solutions. LoopBack comes with a command line tool supporting common development-time activities, from scaffolding to running, scaling, and debugging a Node.js application on the developer machine. Particularly useful is the built-in debugging capability based on the node-inspector module. It allows debugging Node.js applications in a Chrome browser with an experience similar to what Chrome offers for debugging client-side JavaScript.

LoopBack is complemented by StrongOps which provides tools to scale, monitor, manage, and diagnose a Node.js application once deployed. One can identify performance issues, scale out the application at runtime, and analyze memory consumption, among other things.

Creating Node.js distributions out of the many Node.js modules is a natural next step in the evolution of the Node.js ecosystem.

Hosting

There are substantial differences in the structure of the HTTP stacks of .NET and Node.js web applications. .NET web applications use an HTTP stack that builds on top of the kernel-mode HTTP implementation provided by the operating system in the HTTP.SYS component. In contrast, Node.js applications listen directly on TCP ports, and the Node.js runtime provides the HTTP protocol implementation. This difference has important implications for how Node.js applications are hosted.

Generally speaking, Node.js applications typically run as stand-alone executables listening directly on TCP, unlike ASP.NET applications, which are hosted in IIS. Functions related to process management, activation, and fault tolerance are provided by platform-specific components outside of Node.js.

Running Node.js applications as standalone processes started manually is the common practice during application development. Combined with simple Node.js-based process management utilities like supervisor, this model provides great flexibility for a developer: the application is automatically recycled when any of the source files changes. Combined with the lack of a compilation step in JavaScript, it results in a very agile development environment.

When running Node.js applications in production on Linux systems, one typically uses platform specific daemon technologies. For example, when hosting Node.js applications on Ubuntu, it is not uncommon to use Upstart to provide activation, recycling, and process management.
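A minimal, hypothetical Upstart job for such a deployment might look like the sketch below (the job name and paths are made up for illustration):

```
# /etc/init/myapp.conf
description "myapp Node.js server"

start on (filesystem and net-device-up)
stop on shutdown

# restart the process automatically if it crashes
respawn

exec /usr/bin/node /srv/myapp/server.js
```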

When hosting production Node.js web applications on Windows, one has two options. The first is to create a Windows Service around the Node.js application; one of the tools that helps streamline this approach is NSSM. Another option is to host Node.js applications in IIS using iisnode. In the iisnode model, Node.js applications run within IIS in a similar way that FastCGI applications would, except the protocol iisnode uses to communicate with Node.js processes is HTTP over named pipes rather than FastCGI. From the perspective of the Node.js application, it provides full fidelity with establishing a stand-alone TCP listener. You can read more about the features provided by iisnode that differentiate it from self-hosting. Before you decide on a specific way of hosting your app on Windows, you should also understand the performance implications of using iisnode in various scenarios. The iisnode technology is used by Windows Azure Web Sites and several other hosting providers for hosting Node.js applications on Windows.

Tooling

While there are several good environments and tools for developing Node.js applications, generally speaking .NET developers used to the Visual Studio experience of developing .NET applications will need to lower their expectations.

The Node.js developer community at large tends to favor simple yet cross-platform tools. Several text editors are in use for Node.js development, the most widely used being Sublime Text and WebStorm. In its basic form, Sublime provides simple directory-based project management and syntax highlighting for JavaScript. Several extensions exist that support more advanced features, but altogether the story falls well short of Visual Studio support for writing .NET apps. Developers using .NET development tools, in particular ReSharper, may find that WebStorm provides a similar experience.

The best visual debugger for Node.js is node-inspector. It allows Node.js applications to be debugged remotely from the browser, which has a cross-platform advantage over the somewhat brittle remote debugging story in Visual Studio. However, node-inspector is not integrated with the Node.js runtime, and setting it up requires a certain effort on the developer's part. That is why the "printf" style of debugging still plays an important role in the developer's toolset, especially for quick ad-hoc debugging tasks. Two environments that provide integrated node-inspector support are iisnode and StrongOps.

If you are using Windows for development of your Node.js applications, you should most definitely check out the excellent and free Visual Studio Tools for Node.js. This Visual Studio add-on raises the bar for Node.js tools by providing many of the features .NET developers are used to in Visual Studio, for example integrated local and remote debugging (even for apps deployed to non-Windows platforms) and IntelliSense.

The last note here is about transpilation tools. Use of JavaScript on the server often generates extreme reactions (interestingly, many people like JavaScript for the exact same reasons others dislike it). A few attempts have been made to address some of the perceived issues with JavaScript through transpilation of a different syntax to JavaScript. One tool from this space is CoffeeScript, which aspires to simplify the syntax and emphasize the "good parts" of JavaScript. Another is TypeScript, which adds strong typing and type constraints to otherwise untyped JavaScript. The price one pays for the benefits of transpilation is an extra compilation step in the development and deployment workflow, which is a drawback compared to the simple change-save-run workflow that Node.js developers are used to. Transpilation also implies the need to learn a new syntax. TypeScript offers a gentler learning curve than CoffeeScript in that respect by allowing gradual transition from a JavaScript-only code base: any valid JavaScript is also valid TypeScript.

Coding patterns and practices

Node.js, being single-threaded, asynchronous, and based on loosely typed JavaScript, has distinctly different coding patterns, aesthetics, and practices than strongly typed .NET.

Although both JavaScript and .NET combine elements of functional and object oriented programming, Node.js embraces functional programming to a much greater extent than .NET. In fact, the entire Node.js runtime contains just a handful of classes, which capture quintessential concepts of Node.js (e.g. EventEmitter or Stream). The vast majority of APIs in Node.js are grouped into modules exposing functions. And since functions in JavaScript are values, composability in Node.js often relies on passing functions, frequently implemented as closures over other state, as parameters to other functions. The flagship example is the async pattern in Node.js, where by convention an async API accepts a callback function as the last parameter, e.g.:

function startAsyncOperationFoo(parameter1, parameter2, callback) {
    // start the async operation, and when it completes, invoke the callback
    // (setImmediate stands in for real asynchronous work here, ensuring the
    // callback is not invoked before this function returns):
    setImmediate(function () {
        var error = null;
        var result = { a: 12, b: 'foo' };
        callback(error, result);
    });
}

var someState = 7;

startAsyncOperationFoo('abc', 'def', function (error, result) {
    if (error) throw error;
    someState += result.a;
});

The example above demonstrates the basic async calling convention in Node.js:

  • asynchronous functions accept their parameters followed by a callback function,
  • a callback function accepts an error as the first parameter and results of the async operation as subsequent parameters,
  • a callback function should check for the error and only attempt to process results if no error occurred,
  • a callback function is frequently implemented as a closure over some non-local state (e.g. someState above),
  • a callback function frequently starts another async operation.

A naive application of this pattern to more complex logic results in programs that tend to grow horizontally faster than vertically. Consider a case where we want to calculate the result of (5 + 7) * 4 / 3 given asynchronous functions add, multiply, and divide:

add(5, 7, function (error, result) {
    if (error) throw error;
    multiply(result, 4, function (error, result) {
        if (error) throw error;
        divide(result, 3, function (error, result) {
            if (error) throw error;
            console.log(result);
        });
    });
});

The async module remedies this situation by helping developers "flatten" a number of popular asynchronous workflows into more concise JavaScript code. Using the async module, the same computation would take this form:

var async = require('async');

async.waterfall([
    function (callback) {
        add(5, 7, callback);
    },
    function (result, callback) {
        multiply(result, 4, callback);
    },
    function (result, callback) {
        divide(result, 3, callback);
    }
], function (error, result) {
    if (error) throw error;
    console.log(result);
});

Another approach to helping developers organize asynchronous code is to use promises, most notably Bluebird. If you have been using the Task Parallel Library in .NET, you will feel right at home with JavaScript promises.

The upcoming (as of this writing) release of Node.js 0.12 will support the new JavaScript language feature of generators, which introduces new language syntax and semantics similar to the CLR's async/await pattern in the TPL. It allows writing asynchronous code in a synchronous fashion, without blocking the thread on which the code executes. Read more about generators in this blog post.

Another extremely popular utility module used in Node.js development is underscore. It provides some 80 functions that facilitate working with collections, arrays, objects, and functions. Conceptually it is similar to extension methods in .NET, except with a distinctly functional programming twist.

When Node.js is not enough

There are applications for which Node.js is not a good fit and other technologies must be employed.

Given the single-threaded, event-based architecture of Node.js, its primary area of application and strength is in IO-bound workloads: programs that manage a large number of concurrent asynchronous IO operations. There is a large class of such applications:

  • most web applications which accept HTTP requests from clients, execute a transaction in a database, and return HTTP result to the client,
  • orchestration engines, which manage state transitions and initiate asynchronous actions of a long running state machine,
  • a variety of networking applications, which process asynchronous IO events without engaging inordinate amounts of CPU cycles; UDP and TCP servers, mail servers, proxy implementations, gateways, etc.

Node.js is not a good fit for CPU-bound workloads: programs that perform relatively little IO but instead engage in CPU heavy computations. Some examples include image processing, weather prediction, sorting, or any other kind of heavy algorithmic processing. There are two reasons Node.js is not a technology of choice for such applications:

  • performing long-running CPU-bound operations in Node.js would stop the processing of events by the single-threaded event loop,
  • since Node.js primarily targets asynchronous IO workloads, the vast majority of APIs in the Node.js ecosystem are asynchronous, and there are very few modules that address non-IO functionality.

CPU-bound tasks that are performed with ease in a .NET application require extra effort in Node.js. First, a technology other than Node.js must be used to complete the CPU-bound part of the workload. Second, the application must either run that workload as a separate process or require a complex, multi-threaded native extension to Node.js to be created. The most popular approach to this problem is to create a child process using the child_process module of Node.js, and have the child process (likely based on a different technology) perform the CPU-bound computation and return results to Node.js via an IPC mechanism. For example, you can envision a web application in Node.js that accepts a JPEG file from the client, then creates a child process implemented in Java, C#, or Python that resizes the picture and returns the file name of the resized artifact via IPC back to Node.js, which in turn sends it back to the client as an HTTP response. It is worth noting that the child process functionality is getting a face lift in the upcoming release of Node.js v0.12, with the long-awaited support for synchronous execution of child processes (execSync), which is useful in scripting scenarios utilizing Node.js.

A different approach to running CPU-bound logic in Node.js applications is to use the edge.js module. Edge.js allows running C# code inside a Node.js application by hosting the CLR in the Node.js process and providing an interop model between Node.js and the CLR. It works on Windows, MacOS, and Linux (using Mono on non-Windows platforms). Edge.js lets you leverage .NET Framework functionality in your Node.js application without paying the performance and complexity price of cross-process communication and child process management. Edge.js is also useful for leveraging pre-existing .NET components, a frequent situation in non-greenfield application development.

@bajtos commented May 22, 2014

The article is very comprehensive. I really like the subtle way of referencing StrongLoop products and modules.

Here are things that can be improved:

  1. npm is spelled all lower-case; the developers of npm like to make jokes about people using the upper-case spelling.

  2. I would recommend Primus instead of socket.io for real-time communication. The development of socket.io is stalled, the bugs are not fixed, the upcoming 1.x version seems to get never completed.

    Socket.io applications can be scaled out as the session state can be externalized using plugins. The default plugin is based on Redis.

    The implementation of session state synchronization in socket.io is broken, there is a fundamental flaw in the store API design: issue#952, issue#1244.

    Primus recommends using a load balancer with sticky sessions: README.

  3. If you have been using the Razor syntax in ASP.NET MVC, you will likely find EJS very easy to transition to.

    IMO the syntax used by EJS is much closer to the old ASP.NET syntax than to the new Razor syntax.

    There are several attempts on implementing Razor templates in javascript (search results) with no obvious winner so far.

  4. One can identify performance issues thanks to deep integration with DTrace

    I don't think we have that implemented yet. AFAIK StrongOps is monkey patching the javascript code to inject profiling bits.

  5. Tooling: another popular IDE is WebStorm. Developers using Resharper may find it very attractive, as it is similar or identical in many aspects.

  6. there is no good debugging story that allows debugging at the level of the pre-transpilation syntax

    That's not true. Most transpilers can generate a source map file, which can be used by debuggers like Node Inspector to debug the original CoffeeScript/TypeScript code (details).

  7. Coding patterns and practices:

    • The list of solutions to callback hell should mention promises and Bluebird.
    • You may want to mention generators, which allow developers to write in async/await style. Note it is an upcoming feature. Even with Node 0.12, you have to run node with a flag to enable it. More details can be found e.g. in our blog post.

@tjanczuk

tjanczuk commented Jun 3, 2014

npm is spelled all lower-case; the developers of npm like to make jokes about people using the upper-case spelling.

Fixed

@tjanczuk

tjanczuk commented Jun 3, 2014

I would recommend Primus instead of socket.io for real-time communication. The development of socket.io has stalled, bugs are not being fixed, and the upcoming 1.x version seems like it will never be completed.

Looks like socket.io 1.0 just shipped.

@tjanczuk

tjanczuk commented Jun 3, 2014

Socket.io applications can be scaled out as the session state can be externalized using plugins. The default plugin is based on Redis.

The implementation of session state synchronization in socket.io is broken, there is a fundamental flaw in the store API design: issue#952, issue#1244.

Removed reference to redis.

@tjanczuk

tjanczuk commented Jun 3, 2014

If you have been using the Razor syntax in ASP.NET MVC, you will likely find EJS very easy to transition to.

IMO the syntax used by EJS is much closer to the old ASP.NET syntax than to the new Razor syntax.

Agreed. Removed reference to Razor.

@tjanczuk

tjanczuk commented Jun 3, 2014

One can identify performance issues thanks to deep integration with DTrace

I don't think we have that implemented yet. AFAIK StrongOps is monkey patching the javascript code to inject profiling bits.

Thanks. Jerry indicated the same. Removed reference to DTrace.

@tjanczuk

tjanczuk commented Jun 3, 2014

Tooling: another popular IDE is WebStorm. Developers using Resharper may find it very attractive, as it is similar or identical in many aspects.

Thanks. Added WebStorm.

@tjanczuk

tjanczuk commented Jun 3, 2014

there is no good debugging story that allows debugging at the level of the pre-transpilation syntax

That's not true. Most transpilers can generate a source map file, which can be used by debuggers like Node Inspector to debug the original CoffeeScript/TypeScript code (details).

Agreed. Changed the wording around this.

@tjanczuk

tjanczuk commented Jun 3, 2014

Coding patterns and practices:

The list of solutions to callback hell should mention promises and Bluebird.

You may want to mention generators, which allow developers to write in async/await style. Note it is an upcoming feature. Even with Node 0.12, you have to run node with a flag to enable it. More details can be found e.g. in our blog post.

Done. Mentioned all.

@tjanczuk

tjanczuk commented Jun 3, 2014

I have split the post into Part 1 (getting started, installation, configuration) and Part 2 (frameworks, tools, hosting, coding practices). You need to update the hyperlinks in the Part 1 and Part 2 links in each section to cross-reference the two blog posts.
