@simonw created this gist on June 4, 2018.

Very rough benchmarking of Sanic vs. Uvicorn

Really basic load test

I ran Apache Bench like so:

ab -n 1000 -c 100 'http://127.0.0.1:8000/'

uvicorn

Server Software:        uvicorn
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      100
Time taken for tests:   0.130 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      132000 bytes
HTML transferred:       13000 bytes
Requests per second:    7672.07 [#/sec] (mean)
Time per request:       13.034 [ms] (mean)
Time per request:       0.130 [ms] (mean, across all concurrent requests)
Transfer rate:          988.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.1      1       6
Processing:     2   11   6.5      9      36
Waiting:        1    9   6.0      7      31
Total:          4   13   6.5     10      37

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     13
  75%     15
  80%     16
  90%     25
  95%     29
  98%     30
  99%     34
 100%     37 (longest request)

sanic (before adding access_log=False)

Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        17 bytes

Concurrency Level:      100
Time taken for tests:   0.316 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      107000 bytes
HTML transferred:       17000 bytes
Requests per second:    3168.34 [#/sec] (mean)
Time per request:       31.562 [ms] (mean)
Time per request:       0.316 [ms] (mean, across all concurrent requests)
Transfer rate:          331.07 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.4      2       9
Processing:     5   28   6.9     29      43
Waiting:        2   24   6.4     25      39
Total:         10   30   6.8     31      45

Percentage of the requests served within a certain time (ms)
  50%     31
  66%     33
  75%     35
  80%     36
  90%     40
  95%     42
  98%     43
  99%     43
 100%     45 (longest request)

Sanic with access_log=False

Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        17 bytes

Concurrency Level:      100
Time taken for tests:   0.152 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      107000 bytes
HTML transferred:       17000 bytes
Requests per second:    6569.61 [#/sec] (mean)
Time per request:       15.222 [ms] (mean)
Time per request:       0.152 [ms] (mean, across all concurrent requests)
Transfer rate:          686.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.9      2      11
Processing:     2   12   4.7     11      27
Waiting:        1   10   4.4      9      22
Total:          6   15   4.4     14      28

Percentage of the requests served within a certain time (ms)
  50%     14
  66%     16
  75%     17
  80%     19
  90%     21
  95%     23
  98%     25
  99%     25
 100%     28 (longest request)
from sanic import Sanic
from sanic.response import json

app = Sanic()

@app.route('/')
async def test(request):
    return json({'hello': 'world'})

if __name__ == '__main__':
    # access_log=False is what makes the difference in the third set of results above
    app.run(host='0.0.0.0', port=8000, access_log=False)
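
A minimal way to exercise the Sanic app before benchmarking, assuming it is saved as sanic_helloworld.py (that filename is a guess; only the uvicorn file is named in the comments below). The compact 17-byte JSON body matches the Document Length that ab reports above.

python sanic_helloworld.py                # assumed filename
curl -s 'http://127.0.0.1:8000/'          # -> {"hello":"world"}  (17 bytes)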
python3 -m venv venv
source venv/bin/activate
pip install sanic
pip install uvicorn
# Raw ASGI app in the two-callable style uvicorn used at the time:
# constructed with the connection scope, then awaited with (receive, send).
class App:
    def __init__(self, scope):
        self.scope = scope

    async def __call__(self, receive, send):
        await send({
            'type': 'http.response.start',
            'status': 200,
            'headers': [
                [b'content-type', b'text/plain'],
            ],
        })
        await send({
            'type': 'http.response.body',
            'body': b'Hello, world!',
        })
@ibnbay00 commented Oct 7, 2018

How do you run uvicorn_helloworld.py? The answer is:
uvicorn uvicorn_helloworld:App
Why do I get much higher numbers with uvicorn?

Percentage of the requests served within a certain time (ms)
  50%   5022
  66%   5023
  75%   5024
  80%   5025
  90%   5026
  95%   5026
  98%   5026
  99%   5026
 100%   5026 (longest request)

I don't know how your test can be so fast.

NB:

  • Python 3.6.6
  • sanic 0.8.3
  • uvicorn 0.3.9

@MuppetPasta

Running a similar test, I'm also seeing that uvicorn is faster than sanic on trivial short requests. One thing you've left out is that uvicorn also gets significantly faster with access_log=False (see the sketch after the results below).

My results with -n 100000 -c 200 (total time / 95th-percentile request time / requests per second):

  • With access_log:
    • sanic: 61.2438 / 0.1954 / 1632.8177
    • uvicorn: 38.1257 / 0.1576 / 2622.9050
  • Without access_log:
    • sanic: 23.9423 / 0.0713 / 4176.7052
    • uvicorn: 16.9453 / 0.0642 / 5901.3568
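
A rough sketch of turning off uvicorn's access log: current uvicorn releases accept a --no-access-log CLI flag (and an access_log=False keyword to uvicorn.run()); whether the 0.3.9 release benchmarked above spells it the same way is an assumption.

uvicorn uvicorn_helloworld:App --no-access-log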
