nest

A proposal for a universal standard for online services for games.

Goals

  1. Games should only have to implement "online" functionalities once
  2. The same build should be uploaded on all platforms
  3. A universal layer should not be tied to a third-party central service

Goal 1 means that there shouldn't be store-specific code in a game. There can be, but there shouldn't need to be for any functionality supported by nest.

Goal 2 means that build flags, code generators, etc. shouldn't be needed. Which backend is used is discovered at runtime; there's no need to package different builds of the game for different platforms.

Goal 3 means no single entity is responsible for a game's online features to stay up. Literally all stores could go out of business, and the playerbase could still provide a community backend that the game can talk to.

The library approach

One approach is for games to link against nest.dll, or nest.lib, instead of, say, fooapi.dll.

Inside the nest library is code that knows how to talk to various store APIs. From Tyler's tweet:

void unlock_achievement(string achievement_name) {
  if (enabled_platforms.foo) {
    unlock foo achievement
  }
  if (enabled_platforms.bar) {
    unlock bar achievement
  }
  if (enabled_platforms.baz) {
    unlock baz achievement
  }
}

There are several problems with this approach.

Games ship with a specific version of the translation layer

If you released a game with nest "v0.2", and that version had janky Epic support, then unless you release a new build of the game, it'll never get fixed.

Stores can never provide their own implementation of the nest API if it's statically linked. If it's dynamically linked (which Tyler says he didn't want to do), they could in theory replace the nest.dll copy with their own, or do API hooking, both of which are risky and will crash unless they get ABI compatibility exactly right.

Also, if "v0.2" only supports stores X and Y, you'll never gain support for new stores without shipping yet another build.

Using a library from various languages is a lot of work

If nest ships with 20 different functions, then every "binding" - for C#, Java, electron, etc. - will need wrapper code for all 20 of them.

Chances are, some bindings will only add a subset of these functions. Or, the developer of the binding will get some functions slightly wrong, but since they don't use those functions, they'll never notice or bother to update.

Maintaining a native library is a lot of work, too

Will library calls be synchronous (i.e. block the calling thread)? Will you have to use threads to call them in a non-blocking manner? How error-prone will that be?

Will you set up continuous integration infrastructure to ship Windows/Linux/macOS versions of the library to all devs? Will there be up-to-date documentation?

Will you use MinGW or MSVC to compile the Windows build? Both?

Will the code be C++? C99? C89? How many people will be working on that library?

(Note that there is no single, stable C++ ABI. Writing bindings for C++ libraries is a huge headache.)

The JSON-RPC over TCP service

This approach appears "foreign" and "complicated" at first glance, but let me walk you through it.

Why JSON-RPC?

A JSON-RPC service is simply a set of requests and responses that are exchanged between peers over some connection.

Here's an example request to unlock some achievement:

--> {"jsonrpc": "2.0", "method": "Achievements.Unlock", "params": {"id": "Platinum_God"}, "id": 1}
<-- {"jsonrpc": "2.0", "result": "ok", "id": 1}

That's it. It's just lines of JSON.

There is no static or dynamic linking involved, there is no name mangling, there are no ABIs to respect, there's no FFI to figure out - it's just JSON. This works fine for almost all the features that were discussed in the Twitter thread.

(All languages I know of include some form of JSON parser and serializer)
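In Go, for instance, the whole envelope is a couple of struct definitions. This is just an illustration, not a fixed API - the field layout follows the JSON-RPC 2.0 spec:

package nest

import "encoding/json"

// request is the JSON-RPC 2.0 request envelope.
type request struct {
  JSONRPC string          `json:"jsonrpc"` // always "2.0"
  Method  string          `json:"method"`  // e.g. "Achievements.Unlock"
  Params  json.RawMessage `json:"params,omitempty"`
  ID      int64           `json:"id"`
}

// response is the JSON-RPC 2.0 response envelope.
type response struct {
  JSONRPC string          `json:"jsonrpc"`
  Result  json.RawMessage `json:"result,omitempty"`
  Error   *rpcError       `json:"error,omitempty"`
  ID      int64           `json:"id"`
}

// rpcError is the JSON-RPC 2.0 error object.
type rpcError struct {
  Code    int    `json:"code"`
  Message string `json:"message"`
}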

Why TCP?

TCP is overwhelmingly used to transport data over the internet - HTTP traffic, for example, runs over it (up to and including HTTP/2) - but that's not why we're interested in it here.

The game and the store's launcher run in different processes - even with fooapi.dll, there is some IPC (inter-process communication) going on. You can't just "load another process's memory" and start calling functions.

So we need a way for two different processes to communicate. There are various options for that: named pipes are one, but they behave inconsistently across platforms and are generally nasty to use. Shared memory is a high-performance option, but it's just raw bytes - no message framing, no connection lifecycle.

TCP is nice. You can listen on "127.0.0.1:16000" and only local processes can connect to it. The firewall typically isn't involved at all; you just get a reliable, high-speed connection between two processes. You also get notified when the connection is closed (usually because either program exited).

(All languages mentioned above have easy access to TCP sockets)

Scenario: the launcher is nest-friendly

The foo launcher, foo.exe, launches game.exe with the environment variable NEST_ADDRESS set to 127.0.0.1:16000.

The game checks if the NEST_ADDRESS variable is set. If it is, it establishes a (local) TCP connection to that address, sends requests over it, and receives responses from it.
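Here's a rough sketch, in Go, of what that looks like on the game's side. I'm assuming newline-delimited JSON framing here (the exact framing would be part of the spec), and reusing the Achievements.Unlock example from above:

// Sketch of the game side: connect if NEST_ADDRESS is set,
// send one request, read one response.
package main

import (
  "bufio"
  "encoding/json"
  "fmt"
  "net"
  "os"
)

func main() {
  addr := os.Getenv("NEST_ADDRESS")
  if addr == "" {
    fmt.Println("no nest implementation available, running offline")
    return
  }

  conn, err := net.Dial("tcp", addr)
  if err != nil {
    fmt.Println("could not reach nest service:", err)
    return
  }
  defer conn.Close()

  // {"jsonrpc": "2.0", "method": "Achievements.Unlock", ...}
  req := map[string]interface{}{
    "jsonrpc": "2.0",
    "method":  "Achievements.Unlock",
    "params":  map[string]string{"id": "Platinum_God"},
    "id":      1,
  }
  // Encode writes the JSON followed by a newline
  if err := json.NewEncoder(conn).Encode(req); err != nil {
    fmt.Println("write failed:", err)
    return
  }

  // read the response line and print it
  line, err := bufio.NewReader(conn).ReadString('\n')
  if err != nil {
    fmt.Println("read failed:", err)
    return
  }
  fmt.Println("response:", line)
}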

Scenario: the launcher is not nest-friendly

The game is launched, and NEST_ADDRESS is not set. The client library either uses a configured fallback (e.g. local files for cloud saves), or it just returns an error code letting the game know online services aren't available.

Fallbacks could (and should) include compatibility layers. Some stores are unlikely to ever implement the API, but it is possible to implement the nest service in a library that, in turn, talks to the store's own API.

This takes care of Steam, for example.

How does it work for gamedevs?

Exactly as in the library approach. There would be a C library that knows how to connect to a nest implementation, and translates C function calls to JSON-RPC requests, etc.

Gamedevs don't have to know how it actually works.

How does it work for stores?

Minimally, stores don't need to do anything. Steam probably never will. That's why compatibility layers are important.

When a store wants to support nest, all they have to do is implement the service within their launcher, and specify the environment variable when they launch games.

This instantly makes all games using nest compatible with their backend, so it makes it easier for gamedevs to release on their store.
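Here's a rough sketch, in Go, of the launcher's side - the game path, the port, and the handleGame stub are placeholders:

// Sketch of the store/launcher side: listen on a local port,
// then launch the game with NEST_ADDRESS pointing at it.
package main

import (
  "log"
  "net"
  "os"
  "os/exec"
)

func main() {
  // only local processes can connect to 127.0.0.1
  ln, err := net.Listen("tcp", "127.0.0.1:16000")
  if err != nil {
    log.Fatal(err)
  }

  go func() {
    for {
      conn, err := ln.Accept()
      if err != nil {
        return
      }
      go handleGame(conn)
    }
  }()

  cmd := exec.Command("./game.exe")
  cmd.Env = append(os.Environ(), "NEST_ADDRESS="+ln.Addr().String())
  cmd.Stdout = os.Stdout
  cmd.Stderr = os.Stderr
  if err := cmd.Run(); err != nil {
    log.Println("game exited:", err)
  }
}

func handleGame(conn net.Conn) {
  // read JSON-RPC requests and answer them using the store's
  // own backend (left out of this sketch)
  defer conn.Close()
}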

What about synchronous/asynchronous calls?

With a design like that, there is no temptation to make some calls synchronous. Everything has to be asynchronous by design (since you send a request, then don't know the result until you get a response).

However, client libraries could provide blocking variants of any endpoint (by issuing the request on another thread and waiting to be notified of the response). This is generally a bad idea (you want game UIs to remain responsive), but it's possible.
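Here's a sketch of how such a blocking wrapper could look in Go, reusing the envelope types from the earlier sketch. The Client/Call names are illustrative, not a fixed API:

package nest

import (
  "bufio"
  "encoding/json"
  "sync"
)

// Client wraps a nest connection. A reader goroutine (readLoop) matches
// responses to pending requests by "id"; Call blocks until its response arrives.
type Client struct {
  mu      sync.Mutex
  nextID  int64
  pending map[int64]chan response
  enc     *json.Encoder // writes to the TCP connection
}

// Call sends a request and blocks until the matching response arrives.
func (c *Client) Call(method string, params interface{}) (response, error) {
  raw, err := json.Marshal(params)
  if err != nil {
    return response{}, err
  }

  c.mu.Lock()
  c.nextID++
  id := c.nextID
  ch := make(chan response, 1)
  c.pending[id] = ch
  err = c.enc.Encode(request{JSONRPC: "2.0", Method: method, Params: raw, ID: id})
  c.mu.Unlock()
  if err != nil {
    return response{}, err
  }

  return <-ch, nil // delivered by readLoop
}

// readLoop runs in its own goroutine and dispatches responses by id.
func (c *Client) readLoop(r *bufio.Reader) {
  dec := json.NewDecoder(r)
  for {
    var res response
    if err := dec.Decode(&res); err != nil {
      return
    }
    c.mu.Lock()
    if ch, ok := c.pending[res.ID]; ok {
      delete(c.pending, res.ID)
      ch <- res
    }
    c.mu.Unlock()
  }
}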

Who will make all these client libraries? Who will write the documentation?

The great thing about a JSON-RPC service is that you can write a specification once, and then generate libraries for any language/runtime from that.

The specification would be a single file that lists all endpoints: their parameter types, whether each parameter is optional, and the request/response/notification flow for each endpoint. That file is parsed, and code is generated from it: C# code, Java code, JavaScript code, and even documentation in text, HTML, or PDF format.

(This is also technically possible from a C API, it's just much harder to do.)

The only bits that need to be handwritten for the C#/Java/JavaScript client libraries are the TCP socket / JSON serialization bits, i.e. the JSON-RPC standard itself. This is very straightforward compared to virtually every other RPC system out there.

Are there any other advantages to a JSON-RPC service vs a library?

Yes, many!

  • Debugging is extremely easy. You can just put a debugging proxy between your game and the launcher, and see - or even modify - everything that passes through. It's just lines of JSON! (Think Chrome DevTools, but for API calls; a sketch of such a proxy follows this list.)
  • Implementing only parts of the API is easy. Just reply to the requests you know about, and return an error for the others (which JSON-RPC implementations do by default). That way, the game can use features X and Y on store foo, but knows to only use X on store bar, because bar doesn't support Y. (There is no crash because you loaded a library with missing symbols, for example.)
  • It forces you to design an API that is friendly for all languages. You can't use C structs and pointers. You can have resources with identifiers (numbers or strings). It also forces you to design for asynchronous calls, which, for the features we're talking about, is almost all of them.
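For instance, a complete debugging proxy fits in a handful of lines of Go. Addresses here are placeholders; the game would point NEST_ADDRESS at the proxy instead of the launcher:

// Sketch of a debugging proxy: the game connects to the proxy, the proxy
// connects to the real launcher, and every line of JSON is logged.
package main

import (
  "bufio"
  "io"
  "log"
  "net"
)

func main() {
  ln, err := net.Listen("tcp", "127.0.0.1:17000") // the game points NEST_ADDRESS here
  if err != nil {
    log.Fatal(err)
  }
  for {
    game, err := ln.Accept()
    if err != nil {
      log.Fatal(err)
    }
    launcher, err := net.Dial("tcp", "127.0.0.1:16000") // the real nest service
    if err != nil {
      log.Fatal(err)
    }
    go relay("-->", game, launcher)
    go relay("<--", launcher, game)
  }
}

// relay copies newline-delimited JSON from src to dst, logging each line.
func relay(prefix string, src, dst net.Conn) {
  scanner := bufio.NewScanner(src)
  for scanner.Scan() {
    log.Println(prefix, scanner.Text())
    io.WriteString(dst, scanner.Text()+"\n")
  }
}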

How fast is it?

Really fast. (Try pinging localhost).

More importantly, it's a tiny tiny tiny fraction of the time it will actually take the launcher to contact the actual, somewhere-on-the-internet backend. It's negligible.

Has this been proven to work?

Yes! https://itch.io/app and https://github.com/itchio/butler talk to each other with JSON-RPC over TCP.

The app is an electron app, and butler is a Golang executable. They're not linked against each other, they just chat over TCP. The app can upgrade butler independently, and get new features / bugfixes without itself being updated.

The same code is used to generate documentation: http://docs.itch.ovh/butlerd/master/#/, Golang boilerplate, and TypeScript bindings, see https://github.com/itchio/itch/blob/master/src/common/butlerd/messages.ts and https://github.com/itchio/node-butlerd

So, back up a bit, what happens if store foo doesn't support nest?

Then:

  • the game starts up and asks "libcppnest" to initialize
  • "libcppnest" notices that no nest implementation has advertised itself
  • "libcppnest" goes through a list of registered fallbacks
  • oh, there's a compatibility layer for "foo", let's start it up
    • the foo compat layer listens on "127.0.0.1:23000"
  • "libcppnest" connects to "127.0.0.1:23000", works as usual, and will shut down the fallback when the game closes

Again, the game (or gamedev) doesn't need to know about any of that. Their only interaction is with "libcppnest", which is just a regular C++ library they can statically link with.

Ok, so compatibility layers implement nest like stores would implement nest?

Yes, they listen on a local TCP address, tell you which one it is, and then handle requests.

Mhh, who will take care of coordinating all of this?

Hopefully a few people, but I (amos) am willing to put in a good amount of work to get it started.

What would be the first steps?

We take the simplest feature, maybe just "get me the current player's username", and we decide what the endpoint looks like.

For example, we could decide it's named Session.GetUserDetails, and that the result could be either:

  • Error code 1000 if there is no user (the game is launched anonymously or without a launcher)
  • A response in this format:
{
  "user": {
    "username": "fasterthanlime",
    "displayName": "Amos",
    "platform": "itch",
    "itch": {
      "id": 298789
    }
  }
}

We decide on the format for the "unique specification file", with a clear example containing that endpoint.
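To give an idea of what such a file could contain - and this is purely hypothetical, the whole point of this step is to decide on the actual format - an entry for that endpoint might look like:

{
  "methods": {
    "Session.GetUserDetails": {
      "params": {},
      "result": {
        "user": {"type": "User", "optional": false}
      },
      "errors": [
        {"code": 1000, "message": "no user available"}
      ]
    }
  },
  "types": {
    "User": {
      "username": {"type": "string"},
      "displayName": {"type": "string"},
      "platform": {"type": "string"}
    }
  }
}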

We implement a "test server" that responds only to the Session.GetUserDetails endpoint. It doesn't matter which language this is coded in, it could be C, or Go, or Rust.
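Here's a rough sketch of such a test server in Go. It answers Session.GetUserDetails with hardcoded data (the values from the example above) and returns the standard JSON-RPC "method not found" error for everything else:

// Minimal test server sketch.
package main

import (
  "encoding/json"
  "log"
  "net"
)

func main() {
  ln, err := net.Listen("tcp", "127.0.0.1:16000")
  if err != nil {
    log.Fatal(err)
  }
  log.Println("test server listening on", ln.Addr())
  for {
    conn, err := ln.Accept()
    if err != nil {
      log.Fatal(err)
    }
    go serve(conn)
  }
}

func serve(conn net.Conn) {
  defer conn.Close()
  dec := json.NewDecoder(conn)
  enc := json.NewEncoder(conn)
  for {
    var req struct {
      Method string      `json:"method"`
      ID     interface{} `json:"id"`
    }
    if err := dec.Decode(&req); err != nil {
      return // connection closed or bad JSON
    }
    res := map[string]interface{}{"jsonrpc": "2.0", "id": req.ID}
    switch req.Method {
    case "Session.GetUserDetails":
      res["result"] = map[string]interface{}{
        "user": map[string]interface{}{
          "username":    "fasterthanlime",
          "displayName": "Amos",
          "platform":    "itch",
        },
      }
    default:
      // standard JSON-RPC "method not found" error
      res["error"] = map[string]interface{}{"code": -32601, "message": "method not found"}
    }
    if err := enc.Encode(res); err != nil {
      return
    }
  }
}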

We get started on client libraries: I'd happily take electron/Go/Rust, and we test them against that test server. The parts we need to write by hand are opening TCP sockets, serializing requests to JSON, listening for responses and dispatching them properly. Each library will include some code that is generated from the single specification file.

(Again, all of this happens locally, on the same machine, it can be run offline).

Code generators read that single specification file, and, for C, they might generate code like:

typedef struct user {
  char *username;
  char *display_name;
  // etc.
} user;

typedef struct session_get_user_details_response {
  user *user;
} session_get_user_details_response;

// no params needed for this endpoint
int nest_conn_session_get_user_details(nest_conn *conn, session_get_user_details_response **res) {
  // serialize parameters if needed
  // send request
  // wait for response or error (assuming this is a synchronous variant)
  if (success) {
    *res = allocated_response; // alloc response, fill its fields
    return 0; // success!
  } else {
    return SOME_ERROR_CODE;
  }
}

// example usage:
void do_stuff() {
  session_get_user_details_response *user_details;
  int ret = nest_conn_session_get_user_details(conn, &user_details);
  if (ret == 0) {
    printf("our player's name is '%s'\n", user_details->user->username);
  } else {
    printf("could not get user details: %s\n", nest_error_string(ret));
  }
}

C is really annoying (it doesn't have a good string type, memory management is manual, there are no async facilities, etc.), which actually helps my point: this code is better generated, not handwritten. If the generator is good, the whole library will be good.

Generating code for garbage-collected languages like C#, Java, JavaScript, Go, will be much easier.

(This is a very quick and dirty overview of what would happen - I just wanted to show that it's not just a big nice theory, there's a full plan here).

This still seems confusing / I still have some questions

That's normal, there are a few moving parts here :)

However, doing it this way enforces clean design in a bunch of ways:

  • No C/C++ specific language features - all languages must be able to speak nest, so only types like objects, arrays, numbers, strings
  • All requests are asynchronous by default (you can wrap them to be synchronous)
  • Games have to deal gracefully with unimplemented features. To help with that, API calls may be organized in "feature groups" (the "achievements" group, the "cloud saves" group, etc.)
  • All client libraries are always up-to-date, because the "plumbing" (opening TCP sockets, writing/parsing JSON) is clearly separated from the "protocol" (generated from the single file specification).

I'm happy to answer any additional questions!
