@nszceta
Last active March 9, 2019 07:03
# thanks Eli! https://github.com/seemethere
from asyncpg import create_pool
from sanic import Sanic
from sanic.response import json

DB_CONFIG = {}  # FIXME: your DB config here


def jsonify(records):
    """Convert asyncpg Record objects into plain dicts for JSON serialization."""
    return [dict(r.items()) for r in records]


app = Sanic(__name__)


@app.listener('before_server_start')
async def register_db(app, loop):
    app.pool = await create_pool(**DB_CONFIG, loop=loop, max_size=100)
    async with app.pool.acquire() as connection:
        await connection.execute('DROP TABLE IF EXISTS sanic_post')
        await connection.execute("""CREATE TABLE sanic_post (
            id serial primary key,
            content varchar(50),
            post_date timestamp
        );""")
        # use a parameterized query rather than interpolating values into SQL
        for i in range(1000):
            await connection.execute(
                'INSERT INTO sanic_post (id, content, post_date) '
                'VALUES ($1, $2, now())', i, str(i))


@app.get('/')
async def root_get(request):
    async with app.pool.acquire() as connection:
        results = await connection.fetch('SELECT * FROM sanic_post')
        return json({'posts': jsonify(results)})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
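The seeding loop above issues 1000 separate INSERT statements, one round trip each. A possible optimization sketch (a hypothetical helper, assuming the same pool created in `register_db`) is to batch the rows into a single asyncpg `executemany()` call:

```python
# Hypothetical alternative to the row-by-row seeding loop: one
# executemany() call instead of 1000 individual execute() round trips.
SEED_SQL = ('INSERT INTO sanic_post (id, content, post_date) '
            'VALUES ($1, $2, now())')


def seed_rows(n=1000):
    # content is varchar(50), so pass the value as a string
    return [(i, str(i)) for i in range(n)]


async def seed(pool, n=1000):
    # assumes the pool created in register_db above
    async with pool.acquire() as connection:
        await connection.executemany(SEED_SQL, seed_rows(n))
```

This only matters at startup here, but the same pattern applies to any bulk write on the hot path.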
egoag commented Apr 28, 2017

Thanks for this demo. I'm a beginner with sanic and asyncpg, and it helped me a lot. But when I ran this code, my server could only handle about 220 requests per second, which has me confused.
My environment:

  • sanic 0.5.2
  • asyncpg 0.10.1
  • i5 CPU + 16G RAM + MacOS 10.12
  • PostgreSQL 9.6.1 (local)
  • ApacheBench, Version 2.3
Concurrency Level:      100
Time taken for tests:   46.375 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      498840000 bytes
HTML transferred:       497910000 bytes
Requests per second:    215.64 [#/sec] (mean)
Time per request:       463.746 [ms] (mean)
Time per request:       4.637 [ms] (mean, across all concurrent requests)
Transfer rate:          10504.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   1.0      1      12
Processing:    79  462 269.8    426    3157
Waiting:       79  440 233.8    412    3157
Total:         83  463 269.8    427    3158

Percentage of the requests served within a certain time (ms)
  50%    427
  66%    442
  75%    457
  80%    465
  90%    496
  95%    533
  98%    617
  99%   2990
 100%   3158 (longest request)

Is there any possible reason that could lead to this result? Thanks!
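One reading of the ab output above (a back-of-envelope check, not a confirmed diagnosis): every request returns the entire 1000-row table, so each response carries roughly 50 KB of JSON, and serializing plus transferring that payload may dominate the per-request time rather than the database itself:

```python
# Back-of-envelope arithmetic from the ab output quoted above
html_bytes = 497_910_000   # "HTML transferred"
requests = 10_000          # "Complete requests"
seconds = 46.375           # "Time taken for tests"

per_response = html_bytes // requests          # bytes of JSON per response
mb_per_sec = html_bytes / seconds / 1_000_000  # payload throughput

print(per_response)  # 49791 -> roughly 50 KB per response
```

At that payload size, ~10.7 MB/s of JSON is being generated and pushed per second; fetching fewer rows per request (e.g. a LIMIT clause) would isolate whether serialization is the bottleneck.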
