@nszceta
Last active March 9, 2019 07:03
# thanks Eli! https://github.com/seemethere
from asyncpg import create_pool
from sanic import Sanic
from sanic.response import json

DB_CONFIG = {}  # FIXME: your DB config here


def jsonify(records):
    """Convert asyncpg Records into JSON-serializable dicts."""
    return [dict(r.items()) for r in records]


app = Sanic(__name__)


@app.listener('before_server_start')
async def register_db(app, loop):
    # One pool shared by the whole app; connections are reused across
    # requests instead of being opened per request.
    app.pool = await create_pool(**DB_CONFIG, loop=loop, max_size=100)
    async with app.pool.acquire() as connection:
        await connection.execute('DROP TABLE IF EXISTS sanic_post')
        await connection.execute("""CREATE TABLE sanic_post (
            id serial primary key,
            content varchar(50),
            post_date timestamp
        );""")
        for i in range(1000):
            # Parameterized query instead of interpolating values into SQL text
            await connection.execute(
                'INSERT INTO sanic_post (id, content, post_date) '
                'VALUES ($1, $2, now())', i, str(i))


@app.get('/')
async def root_get(request):
    async with app.pool.acquire() as connection:
        results = await connection.fetch('SELECT * FROM sanic_post')
    return json({'posts': jsonify(results)})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
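The jsonify helper only relies on records exposing an .items() method, so its behavior can be sanity-checked with plain dicts standing in for asyncpg Record objects (the stub data below is mine, not part of the gist):

```python
def jsonify(records):
    """Convert asyncpg Records into JSON-serializable dicts."""
    return [dict(r.items()) for r in records]

# Plain dicts also have .items(), so they work as stand-ins for Records.
fake_records = [
    {'id': 1, 'content': '1', 'post_date': '2017-03-10T00:00:00'},
    {'id': 2, 'content': '2', 'post_date': '2017-03-10T00:00:01'},
]
rows = jsonify(fake_records)
assert rows == fake_records            # values round-trip unchanged
assert rows[0] is not fake_records[0]  # but each row is a fresh dict
```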
@nszceta (author) commented Mar 10, 2017

In this benchmark I describe how to obtain a roughly 20% throughput improvement over the current asyncpg demo code (292 vs 240 requests per second in the runs below) by using a global connection pool. All benchmarks were performed on my plugged-in 8-core i7 laptop.

Use Revision 3 if possible. If you are trying to integrate other database drivers you may find inspiration in the other revisions, but the third one offers the cleanest integration and best performance.

# ApacheBench (ab) tool (https://www.cambus.net/benchmarking-http-servers/):
ab -c100 -n10000 http://127.0.0.1:8080/

Revision 1

Concurrency Level:      100
Time taken for tests:   57.805 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    173.00 [#/sec] (mean)
Time per request:       578.049 [ms] (mean)
Time per request:       5.780 [ms] (mean, across all concurrent requests)
Transfer rate:          8427.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0      23
Processing:    39  575  30.8    575     636
Waiting:       39  575  30.8    575     635
Total:         46  575  30.4    576     637

Percentage of the requests served within a certain time (ms)
  50%    576
  66%    581
  75%    586
  80%    589
  90%    595
  95%    599
  98%    602
  99%    603
 100%    637 (longest request)
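As a sanity check, ab's headline numbers are internally consistent: requests per second is simply complete requests divided by total time, and the two "Time per request" lines differ only by the concurrency factor. Verifying against the Revision 1 figures:

```python
# Figures reported by ab for Revision 1
requests, seconds, concurrency = 10000, 57.805, 100

rps = requests / seconds                                  # Requests per second (mean)
per_request_ms = seconds / requests * concurrency * 1000  # Time per request (mean)
across_all_ms = seconds / requests * 1000                 # mean, across all concurrent requests

assert abs(rps - 173.00) < 0.01
assert abs(per_request_ms - 578.049) < 0.01
assert abs(across_all_ms - 5.780) < 0.01
```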

Revision 2

Concurrency Level:      100
Time taken for tests:   38.268 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    261.32 [#/sec] (mean)
Time per request:       382.679 [ms] (mean)
Time per request:       3.827 [ms] (mean, across all concurrent requests)
Transfer rate:          12729.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       3
Processing:    14  380  20.2    382     416
Waiting:       14  380  20.2    382     416
Total:         17  381  20.0    382     417

Percentage of the requests served within a certain time (ms)
  50%    382
  66%    384
  75%    385
  80%    386
  90%    389
  95%    392
  98%    395
  99%    399
 100%    417 (longest request)

Revision 2.1 (unpublished, same as 2 but without using asyncio.Lock)

Concurrency Level:      100
Time taken for tests:   34.573 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    289.24 [#/sec] (mean)
Time per request:       345.728 [ms] (mean)
Time per request:       3.457 [ms] (mean, across all concurrent requests)
Transfer rate:          14090.50 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       9
Processing:    60  345  48.7    361     536
Waiting:       49  325  50.8    341     499
Total:         61  345  48.7    362     536

Percentage of the requests served within a certain time (ms)
  50%    362
  66%    374
  75%    379
  80%    381
  90%    387
  95%    399
  98%    430
  99%    455
 100%    536 (longest request)

Revision 2.2 (same as 2.1, but with transaction management as in Revision 1)

Concurrency Level:      100
Time taken for tests:   35.114 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    284.79 [#/sec] (mean)
Time per request:       351.139 [ms] (mean)
Time per request:       3.511 [ms] (mean, across all concurrent requests)
Transfer rate:          13873.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.4      0       4
Processing:   110  350  53.7    355     552
Waiting:       86  336  55.2    343     537
Total:        113  351  53.7    356     552
ERROR: The median and mean for the initial connection time are more than twice the standard
       deviation apart. These results are NOT reliable.

Percentage of the requests served within a certain time (ms)
  50%    356
  66%    383
  75%    392
  80%    398
  90%    407
  95%    419
  98%    446
  99%    457
 100%    552 (longest request)

Revision 3 (use this one; thanks to Eli, https://github.com/seemethere)

Concurrency Level:      100
Time taken for tests:   34.191 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    292.47 [#/sec] (mean)
Time per request:       341.911 [ms] (mean)
Time per request:       3.419 [ms] (mean, across all concurrent requests)
Transfer rate:          14247.83 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.3      1       5
Processing:    81  340  47.5    351     556
Waiting:       77  321  49.4    333     542
Total:         84  341  47.5    352     556

Percentage of the requests served within a certain time (ms)
  50%    352
  66%    370
  75%    376
  80%    378
  90%    386
  95%    399
  98%    424
  99%    447
 100%    556 (longest request)

The official Sanic asyncpg example in upstream opens a new connection for each request:
https://github.com/channelcat/sanic/blob/88bf78213ffdc168330cfc135b8a25706ef0b1ef/examples/sanic_asyncpg_example.py

Concurrency Level:      100
Time taken for tests:   41.608 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    240.34 [#/sec] (mean)
Time per request:       416.083 [ms] (mean)
Time per request:       4.161 [ms] (mean, across all concurrent requests)
Transfer rate:          11707.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   23 143.5      0    2164
Processing:     7  392 102.2    397    2298
Waiting:        7  359 100.8    365    2297
Total:          7  415 169.0    398    3085

Percentage of the requests served within a certain time (ms)
  50%    398
  66%    407
  75%    413
  80%    420
  90%    450
  95%    576
  98%    941
  99%   1169
 100%   3085 (longest request)
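The pooled version wins most at the tail. Comparing the percentile tables for Revision 3 and the upstream per-connection example: the two are close at the median, but diverge sharply once connection setup stalls pile up at the high percentiles:

```python
# Percentile latencies (ms) copied from the ab output above
revision3 = {50: 352, 95: 399, 99: 447, 100: 556}
upstream  = {50: 398, 95: 576, 99: 1169, 100: 3085}

for p in (50, 95, 99, 100):
    ratio = upstream[p] / revision3[p]
    print(f'p{p}: upstream is {ratio:.1f}x slower')

assert upstream[50] / revision3[50] < 1.2   # median: roughly comparable
assert upstream[99] / revision3[99] > 2.5   # 99th percentile: >2.5x worse
```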

@egoag commented Apr 28, 2017

Thanks for this demo. I'm a beginner with sanic and asyncpg, and it helped me a lot. But when I ran this code my server could only handle about 220 requests per second, and I'm super confused.
My environment:

  • sanic 0.5.2
  • asyncpg 0.10.1
  • i5 CPU + 16 GB RAM + macOS 10.12
  • PostgreSQL 9.6.1 (local)
  • ApacheBench, Version 2.3
Concurrency Level:      100
Time taken for tests:   46.375 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    215.64 [#/sec] (mean)
Time per request:       463.746 [ms] (mean)
Time per request:       4.637 [ms] (mean, across all concurrent requests)
Transfer rate:          10504.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   1.0      1      12
Processing:    79  462 269.8    426    3157
Waiting:       79  440 233.8    412    3157
Total:         83  463 269.8    427    3158

Percentage of the requests served within a certain time (ms)
  50%    427
  66%    442
  75%    457
  80%    465
  90%    496
  95%    533
  98%    617
  99%   2990
 100%   3158 (longest request)

Is there any possible reason that could lead to this result? Thanks!
