SockJS Netty Support

Netty provides support for the SockJS Protocol. SockJS is intended to provide transport protocol support for all modern browsers, including environments where the WebSocket protocol is not supported, in which case it falls back to various browser-specific transport protocols.

Prior to Netty 4, Carl Byström wrote a version for Netty 3.x which was the inspiration for this support in Netty 4.

Protocol

The SockJS client API is intentionally very similar to the WebSocket API. The idea is to hide the transport details from the end user and to gracefully fall back through the supported transport protocols until one is found that is supported by both the browser and the server.

The client API looks like this:

var sock = new SockJS('http://localhost:port/prefix');

sock.onopen = function() {
  console.log('open');
};

sock.onmessage = function(e) {
  console.log('message', e.data);
};

sock.onclose = function() {
  console.log('close');
};

The prefix in the URL passed to the SockJS constructor is the name of the SockJS service. A single SockJS server can handle many SockJS services.

Info request

The first thing that happens in the protocol setup is that the url prefix/info is called, which returns a JSON response similar to this:

{"websocket":true,"cookie_needed":true,"origins":["*:*"],"entropy":2971480297}

This information is retrieved before the client starts the session and is used to determine the features the server supports, as well as a way to measure the roundtrip time between the client and the server.
Each SockJS service can configure its own settings for the information above, and additionally configure things like the session timeout and the heartbeat frequency. For more information see the Configuration section below.

Greeting request

All services also have a greeting url, which is just the simple prefix/name of the SockJS service and can be accessed using a url like http://localhost:port/echo:

Welcome to SockJS!

Session handling

Session handling is required for all transports except WebSocket. This is to allow interactions that behave like WebSockets, where we open a connection, send and receive messages, and later close the connection. The other transports use HTTP and follow a request/response style of interaction. For these transports we provide session handling that works with SockJS frames (OpenFrame, HeartbeatFrame, MessageFrame, and CloseFrame).

The session is always initiated by the client, which specifies a url that looks like this:

http://localhost:port/prefix/serverId/sessionId/transport_protocol

prefix
This is the name of the SockJS service that will perform some business logic. A SockJS server can host any number of SockJS services.

serverId
This is a three digit number chosen by the client. The main use of this path parameter is for sticky sessions, making it easier to configure load balancers.

sessionId
This is a random string that must be unique for every session. This is also generated by the client.

transport_protocol
This is the type of transport that should be used for the SockJS Protocol; see "Transport protocols" below for more information regarding the supported transport protocols.

Frame Protocol

Open Frame

When the server establishes a new session an open frame must be sent to the client.

o

Heartbeat Frame

The server must send a heartbeat frame now and again (configurable) to keep the connection from breaking. The SockJS protocol says that this is typically 25 seconds, which is also the default here.

h

Message Frame

A message is the payload that is destined for the SockJS service; this is business-logic related data. The structure is a JSON array of messages:

a["message"]

Close Frame

If the client asks for data on a closed connection, a close frame is sent in response. It can carry a status code and a status message:

c[3000,"Go away!"]

SockJSService

This is the actual business logic that will send and receive messages. The interface looks like this:

public interface SockJSService {
    SockJsConfig config();
    void onOpen(SockJsSessionContext session);
    void onMessage(String message) throws Exception;
    void onClose();
}

onOpen
Will be called when a new session is created and opened. The SockJsSessionContext instance can be stored and used to send messages over the session and also to close the session.

onMessage
Will be called when a message is sent over the SockJS protocol to this service.

onClose
Will be called when the session is closed.

config
Retrieves this service's configuration. Every service can configure whether websockets are enabled, whether cookies are required, and also properties related to the session, like the session timeout. See the Configuration section below for more details.
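
As a rough illustration, a minimal echo-style service could look like the following sketch. The class name EchoService is hypothetical, the import paths are assumptions based on the io.netty.handler.codec.sockjs package referenced later in this document, and the send(String) call on SockJsSessionContext is assumed from the description above.

import io.netty.handler.codec.sockjs.SockJsConfig;         // assumed package
import io.netty.handler.codec.sockjs.SockJsSessionContext; // assumed package

public class EchoService implements SockJSService {

    private final SockJsConfig config;
    private SockJsSessionContext session;

    public EchoService(final SockJsConfig config) {
        this.config = config;
    }

    @Override
    public SockJsConfig config() {
        return config;
    }

    @Override
    public void onOpen(final SockJsSessionContext session) {
        // Store the session so that messages can be sent (or the session
        // closed) later on.
        this.session = session;
    }

    @Override
    public void onMessage(final String message) throws Exception {
        // Echo the message straight back; assumes SockJsSessionContext
        // exposes a send(String) method as described above.
        session.send(message);
    }

    @Override
    public void onClose() {
        // Nothing to clean up in this simple example.
    }
}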

SockJSServiceFactory

A SockJSService is created on the server side. There might be cases where one would like an instance created for every session, or just one instance shared across all sessions. To accommodate this, a SockJSServiceFactory implementation is what is passed into the SockJSChannelInitializer (a sketch of one possible implementation follows the interface below):

/**
 * A factory for creating {@link SockJSService} instances.
 */
public interface SockJSServiceFactory {

    /**
     * Creates a new instance, or reuses an existing instance, of the service.
     * Allows for either creating a new instance for every session or using
     * a single instance, whatever is appropriate for the use case.
     *
     * @return {@link SockJSService} the service instance.
     */
    SockJSService create();

    /**
     * The {@link SockJsConfig} for the session itself.
     *
     * @return SockJsConfig the configuration for the session.
     */
    SockJsConfig config();

}
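
A sketch of one possible factory, creating a new service instance per session while sharing a single configuration (EchoServiceFactory is a hypothetical name, and EchoService is the hypothetical service from the sketch above):

public class EchoServiceFactory implements SockJSServiceFactory {

    private final SockJsConfig config;

    public EchoServiceFactory(final SockJsConfig config) {
        this.config = config;
    }

    @Override
    public SockJSService create() {
        // A fresh instance per session; a stateless service could instead
        // create one instance up front and return the same reference here.
        return new EchoService(config);
    }

    @Override
    public SockJsConfig config() {
        return config;
    }
}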

Config

Each service can configure its own session properties:

final Config config = Config.prefix("/echo")
                 .cookiesNeeded()
                 .websocketEnabled()
                 .heartbeatInterval(25000)
                 .sessionTimeout(5000)
                 .maxStreamingBytesSize(131072)
                 .build();

prefix
The name of the SockJS service.

cookiesNeeded
Determines if a JSESSIONID cookie will be set. This is used by some load balancers to enable session stickiness.

disableWebsocket
Will disable WebSocket support. By default WebSocket support is enabled.

heartbeatInterval
The interval in milliseconds at which the server will send a heartbeat frame.

maxStreamingBytesSize
The max number of bytes that a streaming transport is allowed to return before closing the connection, forcing the client to reconnect. This is done so that the responseText in the XHR object does not grow and become an issue for the client. By forcing a reconnect the client will create a new XHR object, which can be seen as a form of garbage collection.

Transport protocols

XHR Polling

When using a polling connection there is no persistent connection between the client and the server. Instead the client issues new requests for polling and for sending data to the SockJS service.

An example of an xhr-polling request can look like this:

curl -v http://localhost:8081/echo/123/123/xhr

This is a poll request, and if it is the first request with this session id (the second 123 in the url above), a new session will be created on the server. The response, if successful, will be a 200 with a SockJS open frame. After the session has been opened, the next poll request will do a "long" poll and wait for data to become available on the server side.

But we also need to be able to send messages to the SockJSService which can look like this:

curl -i --header "Content-Type: application/javascript" -X POST -d '["some data"]' http://localhost:8081/echo/123/123/xhr_send

This HTTP request should contain a SockJS message frame, which is an array of JSON encoded messages. This JSON array will be decoded on the server side and each message will be delivered to the target SockJSService by invoking its onMessage method. A successful send returns a status of NO_CONTENT (204).

JSONP Polling

JSONP polling is very similar to xhr-polling in how the connections are handled.

An example of a jsonp-polling request can look like this:

curl -v http://localhost:8081/echo/123/123/jsonp?c=callback

The same session handling semantics are at play as for xhr-polling. You'll notice that in this case we have specified a query parameter named c, for callback, which expects the name of the function that will be called on the client side. What differs from the xhr-polling transport is the format in which data is sent back to the client when making a poll request. Instead of returning the data directly in the body of the response, the data is wrapped in a function call:

callback("o");

We also need to be able to send messages to the SockJSService which can look like this:

curl -i --header "Content-Type: application/javascript" -X POST -d '["some data"]' http://localhost:8081/echo/123/123/jsonp_send

A successful send returns a status of OK.

XHR Streaming

With xhr-streaming the requirement is that HTTP 1.1 is used, because support for HTTP chunking is needed so that the server can hold a persistent connection open to the browser and send chunks of data.

An example of an xhr-streaming request can look like this:

curl -v http://localhost:8081/echo/123/123/xhr_streaming

The HTTP header Transfer-Encoding is set to chunked, which causes the browser to keep the connection open and not close it until it receives the last chunk (a chunk with a length of zero).

The data received by the client is appended to the responseText field in the XHR object. Letting this grow indefinitely is not a good idea, so the server has a configurable maxStreamingBytesSize setting: when that number of bytes has been sent, a last chunk is sent to close the connection. This forces the client to make a new request, which clears out the responseText field, a form of garbage collection if you will.

To send data you use the now familiar xhr_send:

curl -i --header "Content-Type: application/javascript" -X POST -d '["some data"]' http://localhost:8081/echo/123/123/xhr_send

EventSource

The EventSource transport, like xhr-streaming, is a streaming transport: it maintains a persistent connection from the server to the client over which the server can send messages. This is often referred to as Server-Sent Events (SSE).

An example of an eventsource request can look like this:

curl -v http://localhost:8081/echo/123/123/eventsource

To send data you use the now familiar xhr_send:

curl -i --header "Content-Type: application/javascript" -X POST -d '["some data"]' http://localhost:8081/echo/123/123/xhr_send

Much like the XHR streaming transport, the EventSource transport uses the same maxStreamingBytesSize setting to control the build-up of data on the client side. The SockJS protocol reports this as required, as bugs have been found in browsers.

HtmlFile

HtmlFile is also a streaming transport, where the content is placed inside an iframe. The src of the iframe will be the url of the service and should include a callback function:

curl -v http://localhost:8081/echo/123/123/htmlfile?c=callback

The response to this call will be a normal HTTP response of 200 (OK) if successful, but the connection will remain open as this is a streaming transport and the server can write to the connection when updates are available. The first chunk written by the server will be a header for the iframe:

<!doctype html>
    <html><head>
      <meta http-equiv="X-UA-Compatible" content="IE=edge" />
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head><body><h2>Don't panic!</h2>
        <script>
            document.domain = document.domain;
            var c = parent.callback;
            c.start();
            function p(d) {
                c.message(d);
            };
            window.onload = function() {
              c.stop();
            };
            // padding of white space here
            // ...
      </script>

The subsequent chunks will look like this:

<script>
  p('o');
</script>

The message above is for a SockJS open frame and this string will be passed to the p function, which will then delegate to the callback specified as the c query parameter in the url to the SockJS server.

To send data you use xhr_send:

curl -i --header "Content-Type: application/javascript" -X POST -d '["some data"]' http://localhost:8081/echo/123/123/xhr_send

So for every update on the server side a new script element like this will be appended and evaluated. For this reason we need some sort of max limit so that this does not grow out of proportion and cause trouble on the client side. Again, maxStreamingBytesSize is used for just this purpose.

WebSocket

WebSockets come in two forms in SockJS. One is where you connect using the uri pattern that we have been using so far:

https://host:port/echo/serverId/sessionId/websocket

This form is used by SockJS clients. But it is also possible for non-SockJS clients to interact with the SockJS server using raw websockets. This is done by connecting using the uri:

wss://host:port/echo/websocket

This is referred to as raw websockets as the normal SockJS protocol is not in effect. This is intended for non-browser clients that might connect with a WebSocket library.

The initial request to the server performs the HTTP upgrade to WebSocket, and from there on the client and the server maintain a bi-directional connection on which they can both send and receive data. When the server receives data it will extract the messages and invoke the SockJS service.
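
To illustrate the raw websocket endpoint, here is a minimal sketch of a non-browser client built on the JDK 11+ java.net.http.WebSocket API. It is not part of the Netty SockJS support itself, and it assumes the echo service from this document is running on localhost:8081.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class RawWebSocketClient {
    public static void main(String[] args) throws Exception {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8081/echo/websocket"),
                            new WebSocket.Listener() {
                                @Override
                                public CompletionStage<?> onText(WebSocket webSocket,
                                                                 CharSequence data,
                                                                 boolean last) {
                                    // Raw websockets carry plain text; no SockJS framing.
                                    System.out.println("received: " + data);
                                    return WebSocket.Listener.super.onText(webSocket, data, last);
                                }
                            })
                .join();
        ws.sendText("hello raw websocket", true);
        Thread.sleep(1000); // give the echoed message time to arrive
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}

Connecting to the SockJS websocket transport (the serverId/sessionId form) works the same way at the socket level, but the payload then follows the SockJS framing described earlier (o, h, a[...], c[...]).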

Testing using sockjs-protocol

Apart from the unit/functional tests, there is an external test suite provided for SockJS. One of the changes made during the refactoring was to use Netty's CorsHandler, which was extracted and generalized. Doing this caused a few errors to be returned by the sockjs-protocol 0.3.3 tests, as the CORS handling in a few of the tests is not correct in my opinion. The following pull requests have been registered for this:

There was also an issue with parsing of http headers:

These two are included in this branch and can be used to run the sockjs-protocol testsuite.

  1. Start the Netty SockJS Server:

     cd netty/codec-sockjs
     mvn exec:java

  2. Run the sockjs-protocol testsuite:

     cd sockjs-protocol
     make test_deps (only required to be run once)
     ./venv/bin/python sockjs-protocol-0.3.3.py

The equivalent of the above python test suite can be found in io.netty.handler.codec.sockjs.protocol.SockJSProtocolTest.java.

Disclaimer

Currently the haproxy test in the SockJSProtocol does not pass.
The issue here is that when HAProxy tries to send a WebSocket Hixie 76 upgrade request it needs to be able to send the request headers, and receive the response, before it sends the actual nonce.
The test, test_haproxy, and I'm assuming HAProxy itself, does not set a Content-Length header. HttpObjectDecoder in Netty does a check while decoding to see if a Content-Length header exists, which in our case it does not. It then does a second check to see if the request is a WebSocket request, and correctly detects that it is. Netty will then set the Content-Length header to 8 for Hixie 76. This causes the request to not be passed along, as there is no actual body in this request.

Setting the Content-Length to 0 allows the test_haproxy test to pass successfully. I'm not sure how to fix this. I don't think that there should really be a case where the body of a Hixie 76 upgrade request does not have the nonce in the body of the request. So perhaps a workaround specific to SockJS should be put in place.
