The performance of nginx and vertx as a public network gateway.
A public network gateway usually handles TLS and gzip and communicates with its backends over raw TCP. So both programs are tuned to serve TLS and compress the HTTP body with gzip at level 5 (the full nginx and vertx configurations are at the end of this post).
nginx is about 20 times faster when running short connections (mainly because of the TLS handshake).
nginx is about 5 times faster per CPU core when running long connections (nginx served ~17.9k req/s on one worker at 100% CPU, vertx ~7.5k req/s on two instances at 200% CPU).
nginx: 1 worker process for the long connection tests, 2 worker processes for the short connection tests.
vertx: 2 worker threads and 1 acceptor thread.
nginx and vertx run on the same machine.
When proxying, the connections between nginx and vertx never close (upstream keep-alive; see the upstream block in the nginx config below).
--
The client runs 2 threads on a remote machine.
--
When running long connections, all connections are established within a very short time and nginx dispatches them to a single worker, so I limited the worker count to 1.
The long connection tests are run with wrk.
nginx, CPU 100%
./wrk -d 60 -c 10 -t 2 --latency -H 'Accept-Encoding: gzip' 'https://server.com'
Running 1m test @ https://server.com
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   581.39us  438.62us  28.49ms   98.65%
    Req/Sec     8.99k     0.90k    11.01k    82.95%
  Latency Distribution
     50%  535.00us
     75%  568.00us
     90%  623.00us
     99%    1.12ms
  1075515 requests in 1.00m, 761.06MB read
Requests/sec:  17895.48
Transfer/sec:     12.66MB
vertx, CPU 200%
./wrk -d 60 -c 10 -t 2 --latency -H 'Accept-Encoding: gzip' 'https://server.com:8443'
Running 1m test @ https://server.com:8443
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.41ms    0.97ms   34.92ms   94.72%
    Req/Sec     3.79k    566.12     5.36k    68.61%
  Latency Distribution
     50%    1.08ms
     75%    1.74ms
     90%    2.19ms
     99%    5.84ms
  452495 requests in 1.00m, 348.68MB read
Requests/sec:   7541.18
Transfer/sec:      5.81MB
nginx proxying to vertx, CPU 200% (total)
./wrk -d 60 -c 10 -t 2 --latency -H 'Accept-Encoding: gzip' 'https://server.com/vertx/'
Running 1m test @ https://server.com/vertx/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.94ms  649.44us   17.86ms   94.84%
    Req/Sec     5.75k     0.85k     7.41k    72.33%
  Latency Distribution
     50%  802.00us
     75%    0.94ms
     90%    1.29ms
     99%    4.04ms
  686648 requests in 1.00m, 536.97MB read
Requests/sec:  11443.72
Transfer/sec:      8.95MB
The short connection tests are run with ab (only one thread on the client side).
We use ab because vertx won't close the connection even when wrk sends -H 'Connection: Close', and there's no way to force the connection to close without modifying the wrk code.
Note that ab and wrk behave quite differently when requesting over TLS, so it's meaningless to compare the following results with the results above.
keepalive_timeout and keepalive_requests are set to 0 in nginx for this test, so every request uses a fresh connection.
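As a side note: if one wanted vertx to honor the client's Connection: close header (so that wrk could also be used for the short connection tests), a route like the following could be added in front of the StaticHandler route in the vertx code at the end of this post. This is only an untested sketch, not part of the benchmark setup, and it assumes Vert.x 3.3+ where HttpServerRequest#connection() and RoutingContext#addBodyEndHandler are available.

    // hypothetical workaround: close the connection after the response
    // whenever the client asked for "Connection: close"
    router.route().handler(ctx -> {
        if ("close".equalsIgnoreCase(ctx.request().getHeader("Connection"))) {
            // advertise the close and tear the connection down once the body is written
            ctx.response().putHeader("Connection", "close");
            ctx.addBodyEndHandler(v -> ctx.request().connection().close());
        }
        ctx.next();
    });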
nginx, CPU 200%
Server Software: nginx/1.6.2
Server Hostname: server.com
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /
Document Length: 867 bytes
Concurrency Level: 10
Time taken for tests: 11.844 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 10990000 bytes
HTML transferred: 8670000 bytes
Requests per second: 844.30 [#/sec] (mean)
Time per request: 11.844 [ms] (mean)
Time per request: 1.184 [ms] (mean, across all concurrent requests)
Transfer rate: 906.14 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5    9   2.2      9      48
Processing:     0    2   0.6      2      16
Waiting:        0    2   0.6      2      15
Total:          7   12   2.4     11      52

Percentage of the requests served within a certain time (ms)
  50%     11
  66%     12
  75%     12
  80%     12
  90%     13
  95%     16
  98%     20
  99%     22
 100%     52 (longest request)
vertx, CPU 200%
ab -n 10000 -c 10 -H 'Accept-Encoding: gzip' 'https://server.com:8443/'
# .... request progress output omitted ....
Server Software:
Server Hostname: server.com
Server Port: 8443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /
Document Length: 867 bytes
Concurrency Level: 10
Time taken for tests: 200.370 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 11150000 bytes
HTML transferred: 8670000 bytes
Requests per second: 49.91 [#/sec] (mean)
Time per request: 200.370 [ms] (mean)
Time per request: 20.037 [ms] (mean, across all concurrent requests)
Transfer rate: 54.34 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       41  131  49.4    124     450
Processing:     0   69  31.0     65     274
Waiting:        0   68  31.0     62     273
Total:         41  200  71.7    191     570

Percentage of the requests served within a certain time (ms)
  50%    191
  66%    226
  75%    247
  80%    261
  90%    290
  95%    321
  98%    369
  99%    404
 100%    570 (longest request)
nginx proxying to vertx, CPU 200% in total (nginx ~190% and vertx ~10%; nginx handles the full TLS handshake and gzip compression, while vertx only serves plain HTTP and dispatches files)
ab -n 10000 -c 10 -H 'Accept-Encoding: gzip' 'https://server.com/vertx/'
# .... request progress output omitted ....
Server Software: nginx/1.6.2
Server Hostname: server.com
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /vertx/
Document Length: 867 bytes
Concurrency Level: 10
Time taken for tests: 12.376 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 11550000 bytes
HTML transferred: 8670000 bytes
Requests per second: 808.02 [#/sec] (mean)
Time per request: 12.376 [ms] (mean)
Time per request: 1.238 [ms] (mean, across all concurrent requests)
Transfer rate: 911.39 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        4    8   2.0      8      37
Processing:     0    4   1.5      4      20
Waiting:        0    4   1.5      4      20
Total:          5   12   2.7     12      42

Percentage of the requests served within a certain time (ms)
  50%     12
  66%     13
  75%     14
  80%     14
  90%     15
  95%     16
  98%     19
  99%     23
 100%     42 (longest request)
nginx (this config was used for the long connection tests; worker_processes was set to 2 for the short connection tests, and keepalive_timeout/keepalive_requests were set to 0 for the ab runs):
user www-data;
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # effectively never close client keep-alive connections (long connection tests)
    keepalive_timeout 999999999999;
    keepalive_requests 999999999999;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /dev/null;
    error_log /dev/null;

    gzip on;
    gzip_comp_level 5;

    # keep up to 100 idle connections to the vertx backend open
    upstream vertx {
        keepalive 100;
        server 127.0.0.1:8080;
    }

    server {
        listen 443 ssl;
        ssl_certificate cert.pem;
        ssl_certificate_key key.pem;

        # proxy to vertx over plain HTTP/1.1 so upstream keep-alive works
        location /vertx/ {
            rewrite /vertx/(.*) /$1 break;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://vertx;
        }

        # serve static files directly
        location / {
            root /var/www/html;
            index index.html;
        }
    }
}
vertx:
package mytest;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.net.PemKeyCertOptions;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.StaticHandler;

public class Main {
    public static void main(String[] args) {
        // number of verticle instances to deploy (2 in the tests above)
        int size = Integer.parseInt(args[0]);
        Vertx vertx = Vertx.vertx(new VertxOptions());

        // works the same as nginx: TLS termination + gzip level 5, serving static files on 8443
        vertx.deployVerticle(() -> new AbstractVerticle() {
            @Override
            public void start() {
                Router router = Router.router(vertx);
                router.get("/*").handler(StaticHandler.create());
                vertx.createHttpServer(
                        new HttpServerOptions()
                                .setSsl(true)
                                .setCompressionSupported(true)
                                .setCompressionLevel(5)
                                .setPemKeyCertOptions(
                                        new PemKeyCertOptions()
                                                .setCertPath("/etc/nginx/cert.pem")
                                                .setKeyPath("/etc/nginx/key.pem"))
                ).requestHandler(router).listen(8443);
            }
        }, new DeploymentOptions().setInstances(size));

        // works as the raw TCP (plain HTTP) backend on 8080 that nginx proxies to
        vertx.deployVerticle(() -> new AbstractVerticle() {
            @Override
            public void start() {
                Router router = Router.router(vertx);
                router.get("/*").handler(StaticHandler.create());
                vertx.createHttpServer().requestHandler(router).listen(8080);
            }
        }, new DeploymentOptions().setInstances(size));
    }
}
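The numbers above were collected with size = 2 (two verticle instances, matching the "2 worker threads" noted at the top). A command line along these lines would start it; the classpath placeholder is an assumption, not part of the original setup:

    java -cp <path to vertx-core, vertx-web and their dependencies>:. mytest.Main 2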