Phoenix Showdown Comparative Benchmarks @ Rackspace


I've taken the benchmarks from Matthew Rothenberg's phoenix-showdown, updated Phoenix to 0.13.1, and run the tests on the most powerful machines available at Rackspace.
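All entries serve the same `/showdown` page, an HTML template rendered over a fixed set of sample data. For orientation, the Plug entry boils down to a bare router of roughly this shape (a sketch only; `Showdown.Page.render/0` is a hypothetical stand-in for the app's actual template rendering):

```elixir
defmodule Showdown.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/showdown" do
    # The real showdown app renders an EEx template over sample data,
    # producing roughly 2 KB of HTML per response (total bytes read
    # divided by total requests in the results below).
    send_resp(conn, 200, Showdown.Page.render())
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end
```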

Results

| Framework | Throughput (req/s) | Latency (ms) | Consistency (σ, ms) |
| --- | --- | --- | --- |
| Plug | 198328.21 | 0.63 | 2.22 |
| Phoenix 0.13.1 | 179685.94 | 0.61 | 1.04 |
| Gin | 176156.41 | 0.65 | 0.57 |
| Play | 171236.03 | 1.89 | 14.17 |
| Phoenix 0.9.0-dev | 169030.24 | 0.59 | 0.30 |
| Express Cluster | 92064.94 | 1.24 | 1.07 |
| Martini | 32077.24 | 3.35 | 2.52 |
| Sinatra | 30561.95 | 3.50 | 2.53 |
| Rails | 11903.48 | 8.50 | 4.07 |
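As a sanity check, each throughput figure is just total requests divided by wall-clock time: for Phoenix 0.13.1 below, 5,408,349 requests / 30.10 s ≈ 179,679 req/s, in line with the reported 179,685.94 (wrk divides by the exact, unrounded elapsed time).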

Environment

| Environment | Version |
| --- | --- |
| Erlang | 17.5 |
| Elixir | 1.1.0-dev |
| Ruby | 2.2.0 |
| Node | 0.12.4 |
| Go | 1.3.3 |
| Java | 8u45 |

Server

Rackspace OnMetal IO

| Spec | Value |
| --- | --- |
| CPU | Dual 2.8 GHz, 10-core Intel® Xeon® E5-2680 v2 |
| RAM | 128 GB |
| System Disk | 32 GB |
| Data Disk | Dual 1.6 TB PCIe flash cards |
| Network | Redundant 10 Gb/s connections in a high-availability bond |
| Disk I/O | Good |
| Price | $2.4658 / hr |

Client

Rackspace OnMetal Compute

| Spec | Value |
| --- | --- |
| CPU | 2.8 GHz, 10-core Intel® Xeon® E5-2680 v2 |
| RAM | 32 GB |
| System Disk | 32 GB |
| Network | Redundant 10 Gb/s connections in a high-availability bond |
| Disk I/O | Good |
| Price | $0.7534 / hr |

Detailed Results
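Every run below uses the same wrk invocation: `-t20` spawns 20 client threads, `-c100` holds 100 HTTP connections open, `-d30S` runs the test for 30 seconds, and `--timeout 2000` raises the request timeout well above the observed latencies, so slow responses are recorded as latency rather than counted as errors.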

PHOENIX 0.9.0-dev (original)

```
server:~# MIX_ENV=prod elixir -pa _build/prod/consolidated -S mix phoenix.server
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:4000/showdown"
Running 30s test @ http://10.184.11.239:4000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   595.50us  398.08us  62.05ms   99.05%
    Req/Sec     8.50k   304.25     9.20k    70.34%
  5087667 requests in 30.10s, 10.42GB read
Requests/sec: 169030.24
Transfer/sec:    354.47MB
```

PHOENIX 0.13.1 (current)

```
server:~# MIX_ENV=prod elixir -pa _build/prod/consolidated -S mix phoenix.server
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.13.193:4000/showdown"
Running 30s test @ http://10.184.13.193:4000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   611.61us    1.04ms  95.57ms   99.05%
    Req/Sec     9.04k   656.30    10.98k    71.84%
  5408349 requests in 30.10s, 11.08GB read
Requests/sec: 179685.94
Transfer/sec:    376.82MB
```

PLUG

```
server:~# MIX_ENV=prod elixir -pa _build/prod/consolidated -S mix server
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:4000/showdown"
Running 30s test @ http://10.184.11.239:4000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   631.83us    2.22ms 217.98ms   99.05%
    Req/Sec     9.99k   761.02    18.66k    78.99%
  5969613 requests in 30.10s, 12.00GB read
Requests/sec: 198328.21
Transfer/sec:    408.34MB
```

PLAY

```
server:~# ./activator clean stage
server:~# ./target/universal/stage/bin/play-scala -Dhttp.port=5000
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:5000/showdown"
Running 30s test @ http://10.184.11.239:5000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.89ms   14.17ms 349.18ms   98.57%
    Req/Sec     8.80k     1.58k   23.45k    86.04%
  5154091 requests in 30.10s, 9.82GB read
Requests/sec: 171236.03
Transfer/sec:    334.12MB
```

RAILS

```
server:~# PUMA_WORKERS=20 MIN_THREADS=1 MAX_THREADS=80 RACK_ENV=production bundle exec puma
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:3000/showdown"
Running 30s test @ http://10.184.11.239:3000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.50ms    4.07ms  61.71ms   70.94%
    Req/Sec   598.10     40.81   740.00     71.03%
  357320 requests in 30.02s, 788.91MB read
Requests/sec:  11903.48
Transfer/sec:     26.28MB
```

SINATRA

```
server:~# RACK_ENV=production bundle exec puma -t 1:16 -w 20 --preload
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:9292/showdown"
Running 30s test @ http://10.184.11.239:9292/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.50ms    2.53ms  56.23ms   74.62%
    Req/Sec     1.54k   414.73     5.33k    65.96%
  919897 requests in 30.10s, 1.85GB read
Requests/sec:  30561.95
Transfer/sec:     63.04MB
```

GIN

```
server:~# GOMAXPROCS=20 GIN_MODE=release go run server.go
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:3000/showdown"
Running 30s test @ http://10.184.11.239:3000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   655.59us  573.14us  10.11ms   89.21%
    Req/Sec     8.86k   759.03    15.02k    82.84%
  5302230 requests in 30.10s, 10.31GB read
Requests/sec: 176156.41
Transfer/sec:    350.61MB
```

MARTINI

```
server:~# GOMAXPROCS=20 MARTINI_ENV=production go run server.go
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:3000/showdown"
Running 30s test @ http://10.184.11.239:3000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.35ms    2.52ms  32.73ms   60.45%
    Req/Sec     1.61k    99.91     1.98k    68.77%
  962648 requests in 30.01s, 1.87GB read
Requests/sec:  32077.24
Transfer/sec:     63.84MB
```

EXPRESS (20-node cluster)

```
server:~# NODE_ENV=production node server.js -w 20
client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.184.11.239:3000/showdown"
Running 30s test @ http://10.184.11.239:3000/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.24ms    1.07ms  26.92ms   83.39%
    Req/Sec     4.63k     2.37k   13.34k    74.60%
  2771185 requests in 30.10s, 5.65GB read
Requests/sec:  92064.94
Transfer/sec:    192.37MB
```

Bonus Track

ASP.NET 4.5.1 MVC 5 on IIS 8

I've also benchmarked ASP.NET, since I work with .NET at my day job. Rackspace doesn't offer an OnMetal instance with Windows, only virtualized instances, so I ran the benchmarks on the most powerful one I could find.

The Rackspace I/O v1 120 GB instance is listed as:

| Spec | Value |
| --- | --- |
| CPU | 32 vCPUs |
| RAM | 120 GB |
| System Disk | 40 GB |
| Data Disk | 1.2 TB (4 disks) |
| Network | 10 Gb/s |
| Disk I/O | Best |

Results:

```
root@client:~# wrk -t20 -c100 -d30S --timeout 2000 "http://10.223.241.211/showdown"
Running 30s test @ http://10.223.241.211/showdown
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    62.98ms  104.13ms 962.77ms   81.84%
    Req/Sec     1.02k   435.75     2.92k    70.76%
  581748 requests in 30.02s, 1.39GB read
Requests/sec:  19381.05
Transfer/sec:     47.37MB
```

The results were pretty horrid, but the cause might be virtualization overhead, poor Windows virtualization on Rackspace's side, or a weak 10 Gb/s link between the OnMetal and virtualized instances. I might do more testing this weekend to try to figure that out.


Golam81 commented Sep 2, 2020

I just don't understand why Phoenix performs so poorly in TechEmpower; even in Round 19, Play 2 and Gin blasted Phoenix.
