Raw Node HTTP server vs. stock Node HTTP server (measured on a cheap PC laptop running Ubuntu Linux).
require('http').createServer(function (req, res) {
  res.writeHead(200, {
    "Content-Length": 12
  });
  res.end("Hello World\n");
}).listen(3000);
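As a quick sanity check (mine, not part of the gist), the hard-coded Content-Length of 12 does match the byte length of the body:

```javascript
// "Hello World\n" is 11 visible characters plus one newline = 12 bytes,
// so the Content-Length header above is correct.
console.log(Buffer.byteLength("Hello World\n")); // 12
```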
Server Software:
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /
Document Length: 12 bytes
Concurrency Level: 100
Time taken for tests: 5.000 seconds
Complete requests: 37431
Failed requests: 0
Write errors: 0
Keep-Alive requests: 37431
Total transferred: 4192272 bytes
HTML transferred: 449172 bytes
Requests per second: 7486.11 [#/sec] (mean)
Time per request: 13.358 [ms] (mean)
Time per request: 0.134 [ms] (mean, across all concurrent requests)
Transfer rate: 818.79 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       4
Processing:     9   13   2.0     13      60
Waiting:        9   13   2.0     13      60
Total:          9   13   2.0     13      62
Percentage of the requests served within a certain time (ms)
50% 13
66% 14
75% 14
80% 14
90% 14
95% 14
98% 16
99% 19
100% 62 (longest request)
var HTTPParser = process.binding('http_parser').HTTPParser;
var TCP = process.binding('tcp_wrap').TCP;

// Pre-built response bytes, written straight to the socket.
// Status line fixed: the gist originally said "HTTP1.1" (no slash),
// which is why ab counts all 50000 responses as non-2xx below.
var cannedResponse = new Buffer(
  "HTTP/1.1 200 OK\r\n" +
  "Content-Length: 12\r\n" +
  "Connection: Keep-Alive\r\n" +
  "\r\n" +
  "Hello World\n"
);

function noop() {}

var server = new TCP();
server.bind("0.0.0.0", 3000);
server.listen(200);
server.onconnection = function (client) {
  var parser = new HTTPParser(HTTPParser.REQUEST);
  parser.onHeadersComplete = function (info) {
    // console.log("info", info);
  };
  parser.onBody = function (buffer, offset, length) {
    // slice takes (start, end), so the end index is offset + length
    buffer = buffer.slice(offset, offset + length);
    // console.log("body", buffer);
  };
  parser.onMessageComplete = function () {
    var ret = client.writeBuffer(cannedResponse);
    ret.oncomplete = function () {
      // client.close();
    };
  };
  client.onread = function (buffer, offset, length) {
    if (buffer) {
      parser.execute(buffer, offset, length);
    } else {
      parser.finish();
      client.close();
    }
  };
  client.readStart();
};
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /
Document Length: 12 bytes
Concurrency Level: 100
Time taken for tests: 2.191 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Non-2xx responses: 50000
Keep-Alive requests: 50000
Total transferred: 3700000 bytes
HTML transferred: 600000 bytes
Requests per second: 22821.44 [#/sec] (mean)
Time per request: 4.382 [ms] (mean)
Time per request: 0.044 [ms] (mean, across all concurrent requests)
Transfer rate: 1649.21 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:     1    4   0.9      4      11
Waiting:        1    4   0.9      4      11
Total:          1    4   0.9      4      13
Percentage of the requests served within a certain time (ms)
50% 4
66% 4
75% 4
80% 5
90% 6
95% 6
98% 7
99% 7
100% 13 (longest request)
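Putting the two runs side by side (my arithmetic, derived only from the numbers reported above; note ab counted the raw run's 50000 responses as non-2xx, though that should not meaningfully affect throughput):

```javascript
// Throughput comparison computed from the two ab runs above
var stockRps = 37431 / 5.000;  // stock http server: ~7486 req/s
var rawRps = 50000 / 2.191;    // raw TCP + parser server: ~22821 req/s
var speedup = rawRps / stockRps;
console.log(speedup.toFixed(2) + 'x'); // roughly 3x
```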
@mahidhiman

commented Nov 5, 2018

So which one is better? It seems the raw one is faster, but should I use it in real projects or not?
