# UDP Sending Strategy Experiment
Tested 4 different strategies for sending UDP packets in Node.js: create/destroy socket in series, create/destroy socket in parallel, cached socket in series, and cached socket in parallel.
## Create/destroy socket, in series

This strategy creates a new UDP4 socket for every message and destroys it once the send completes. Messages are dispatched one at a time, so network latency directly affects how long it takes to send them all.
- Server: Received ~4900 messages per second.
- Client: Took 20.4 seconds to send 100000 messages.
## Create/destroy socket, in parallel

This strategy also creates a UDP4 socket per message and destroys it when done, but it dispatches all sends at once, relying on Node.js backpressure to absorb the network overhead.
- Server: ** Crashed: Out of file handles **
- Client: ** Crashed: Out of file handles **
## Cached socket, in series

This strategy creates one UDP4 socket and reuses it for every send. Messages are still dispatched one at a time, so network latency directly affects how long it takes to send them all.
- Server: Received ~6300 messages per second.
- Client: Took 15.8 seconds to send 100000 messages.
## Cached socket, in parallel

This strategy creates one UDP4 socket and reuses it for every send, but dispatches all sends at once, relying on Node.js backpressure to absorb the network overhead.
- Server: Received ~13300 messages per second.
- Client: Took 7.76 seconds to send 100000 messages.
- Other: 3.9% packet loss (probably the server's UDP receive buffer filling up)
## Conclusions

Caching a socket always performs better. Dispatching messages asynchronously gives incredible throughput, but if the server gets swamped, you can lose packets. It makes sense that serial dispatch loses no packets: only one message is in flight at a time. (Pretty much.)