@tiffany352
Last active December 18, 2015 21:19
IntenseLogic Ideas

Asset streaming would require placeholder assets, as well as the ability to reload assets as needed. A placeholder would be returned immediately while the real asset downloads in the background; once the download completes, the asset would be refreshed in place, yielding the real data.

The input library would not deal with input itself; that would be provided by a backend, and the GLFW backend would reside in libilgraphics. This way, graphics libraries would be confined to graphics while still being able to provide input.

Its purpose would be to translate keypresses and mouse movements into actual actions, then fire events corresponding to those actions on the appropriate objects. It would be highly programmable, allowing various kinds of sequences and controlling devices. A particular use case would be changing how a joystick is mapped, allowing it to be used either as a 2D input or as translation/rotation through 3D space.

Multi-monitor support would consist of a series of shell contexts, one for each monitor, that share with the primary rendering context. The primary context would render at a size sufficient to contain all of the monitors, and the output would be split up accordingly. In the future, there could also be multi-monitor support using separate contexts, to allow for multi-GPU rendering.

This would all be done through a configuration file with some options, one for each graphics card:

  • On/Off - Whether or not to create the shell contexts
  • Monitors to use - A list of monitors that will be used for rendering as well as an optional per-monitor video mode, with a keyword of "all" permitted
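
Such a configuration file might look like this (the syntax is purely illustrative — no actual format has been decided):

```
; one section per graphics card -- hypothetical syntax
[card0]
contexts = on          ; whether to create the shell contexts
monitors = all         ; every attached monitor, default video mode

[card1]
contexts = on
monitors = DVI-0 1920x1080@60, DVI-1
```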

Modify libusrsctp to work async: this requires adding a run-time flag to not start the read thread, as well as abstracting out the contents of the receive loop into a handler function which may be called after a read/write event occurs on the internal UDP file descriptor.

  • SCTP_BASE_VAR(userspace_rawsctp) is the fd for SCTP/IPv4
  • SCTP_BASE_VAR(userspace_rawsctp6) is the fd for SCTP/IPv6
  • SCTP_BASE_VAR(userspace_udpsctp) is the fd for SCTP/UDP/IPv4
  • SCTP_BASE_VAR(userspace_udpsctp6) is the fd for SCTP/UDP/IPv6
  • Lost-packet resends might require additional work in the code, of an as-yet-unknown nature
  • Writing is fine

The wire format for the event passing protocol will be: flags:8 channel:8 event:16 [object:32] [type:32]

  • little endian (allows copy-out in optimal scenarios)
  • flags
    • type: the object type this packet is to be broadcasted to
    • object: the object instance this packet is to be broadcasted to
    • type and object must not both be set; the type of an object must be known ahead of time, and the object namespace is flat
  • channel: default channel 0 is for connection management only; all others may be registered with il.net.channel. These correspond to ilE_registry instances
  • event: the id of the event in this channel

State logs that can be diffed and rebased, which will be used to implement server-side movement with client-side prediction and as little jumping as possible. The less jumping you want, the more of the world the client needs to simulate. Ping issues will unavoidably cause jumping. Server and client must produce identical simulations from the input logs, or there will be lots of jumps. This "fat client, fat server" method prevents flyhacks/speedhacks without causing the "drunken sailor" effect for laggy players.

A circular buffer of the server .. client input chain would be stored internally; the server will have to replay inputs based on the supplied time and account for latency, with limits to prevent what are essentially time-travel hacks.

A state hashing mechanism to check for server-client desyncs and nondeterminism.

Connection saturation management:

  • Damage control is more important than accuracy: position updates do not need to be perfect
  • Buffering priority should be assigned on a per-packet basis as well as a per-packet-type basis
  • A global sync mechanism for when unimportant data falls behind, run while the pipe isn't congested

A proxy server to allow distributing bandwidth costs across multiple servers.

Inter-server communication to allow load distribution.

Contexts will be split up into multiple data structures: data that is important to the OpenGL context, and data that is important for the current scene. That is, data that would be duplicated for a security camera (camera object, world pointer, stage layout, etc.) is going to be moved out of the context structure. This allows scenes to be rendered across multiple contexts (for example, a security camera view that is split in-game across multiple real-world monitors). However, this will require that each scene either have its own per-context framebuffer, be a context itself, or only use contexts which allow object sharing.

It will take some work to figure out where the lines are to be drawn between scene and context.

Contexts will be rendered by dedicated threads, so that drivers that ignore SwapInterval() will not be a problem. Rendering threads will either have a copy of the engine scene graph, or operate on it directly; how well the latter works will decide which I use. Message passing will be used to signify when to upload meshes or resize the window.

A problem with adding concurrency to IntenseLogic is that the current data structures for the scene graph both cannot be accessed safely concurrently, and cannot be copied inexpensively.

A potential solution would be to reorder the scene graph to be flat and in large buffers, as this would allow cheap copies and allocations.

The internals would be as flat arrays:

struct il_world {
    il_base base;
    size_t num_objects, object_capacity;
    float (*position)[4];
    float (*velocity)[4];
    float (*size)[4];
    float (*rotation)[4];
    // Using an integer identifier for the rendering information allows
    // multiple structures from different contexts to share the same ID,
    // making multi-context rendering viable.
    int *drawable;
    int *material;
    int *texture;
};

The interface would hide this behind object handles:

il_vec3 il_getPosition(il_world *self, int object);
void il_setVelocity(il_world *self, int object, const il_vec3 velocity);

It would even be possible to flatten the object IDs into a global namespace, but this would only be for the convenience factor.

Forking engine state would only require a handful of allocations (it could even be just 1!) and a few memcpys. This would make it more than practical to have the rendering thread fork the engine state each time it needs to. Because the interface is abstract, it would even be possible to have a mode where only diffs are saved when changes are added, which could be reintegrated into one world from multiple threads at a fence of some sort.
