
@pfreixes
Last active May 16, 2021 08:17
Gevent vs asyncio with libuv

The numbers claimed by this benchmark for Gevent [1], compared with the numbers obtained by asyncio with uvloop and even with the default loop, left me a bit frozen. I've repeated a few of them (gevent, asyncio, asyncio-uvloop, and Go) for the echo server, and these are roughly the numbers:
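The actual servers under test come from the uvloop benchmark suite [1]. As a point of reference only, a minimal asyncio echo server in the same spirit could look like the sketch below; it uses the modern 3.7+ asyncio API rather than the 3.5 style used in the benchmark, and it runs a built-in round-trip self-check instead of serving forever:

```python
import asyncio

async def handle(reader, writer):
    # Echo every chunk back until the client closes its side.
    while True:
        data = await reader.read(1024)
        if not data:
            break
        writer.write(data)
        await writer.drain()
    writer.close()

async def main():
    # Port 0 lets the OS pick a free port, so the sketch is self-contained.
    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    # Tiny self-check: send one message and read back the echo.
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'ping')
    await writer.drain()
    echoed = await reader.read(1024)

    writer.close()
    server.close()
    await server.wait_closed()
    return echoed

if __name__ == '__main__':
    assert asyncio.run(main()) == b'ping'
```

The real benchmark servers keep the loop running and let an external client hammer them for 30 seconds, which is what the numbers below measure.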

For gevent

$ ./echo_client
685393 0.98KiB messages in 30 seconds
Latency: min 0.04ms; max 4.48ms; mean 0.126ms; std: 0.048ms (37.68%)
Latency distribution: 25% under 0.088ms; 50% under 0.122ms; 75% under 0.158ms; 90% under 0.182ms; 99% under 0.242ms; 99.99% under 0.91ms
Requests/sec: 22846.43
Transfer/sec: 21.79MiB

For asyncio

$ ./echo_client
286039 0.98KiB messages in 30 seconds
Latency: min 0.04ms; max 2.51ms; mean 0.309ms; std: 0.074ms (23.85%)
Latency distribution: 25% under 0.286ms; 50% under 0.304ms; 75% under 0.328ms; 90% under 0.378ms; 99% under 0.514ms; 99.99% under 1.489ms
Requests/sec: 9534.63
Transfer/sec: 9.09MiB

For asyncio-uvloop

$ ./echo_client
617248 0.98KiB messages in 30 seconds
Latency: min 0.04ms; max 9.32ms; mean 0.141ms; std: 0.057ms (40.76%)
Latency distribution: 25% under 0.1ms; 50% under 0.13ms; 75% under 0.174ms; 90% under 0.218ms; 99% under 0.279ms; 99.99% under 1.099ms
Requests/sec: 20574.93
Transfer/sec: 19.62MiB

For Go

$ ./echo_client
1341627 0.98KiB messages in 30 seconds
Latency: min 0.03ms; max 16.46ms; mean 0.061ms; std: 0.039ms (63.55%)
Latency distribution: 25% under 0.051ms; 50% under 0.058ms; 75% under 0.067ms; 90% under 0.077ms; 99% under 0.118ms; 99.99% under 1.147ms
Requests/sec: 44720.9
Transfer/sec: 42.65MiB
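For the asyncio-uvloop run, the only difference from plain asyncio is which event loop implementation backs the same server code. Assuming uvloop is installed (pip install uvloop), the switch is a one-time policy swap before any loop is created; the ImportError fallback below is only so this sketch also runs where uvloop is absent:

```python
import asyncio

try:
    import uvloop  # third-party libuv-based event loop (pip install uvloop)
    # Must run before any event loop is created; every asyncio loop
    # created afterwards is then backed by libuv.
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    loop_impl = 'uvloop'
except ImportError:
    loop_impl = 'default asyncio'

print('event loop implementation:', loop_impl)
```

All of the asyncio server code stays untouched; that is what makes the asyncio vs asyncio-uvloop comparison above a like-for-like test of the loop implementation alone.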

The system CPU used is:

Processor Name:	Intel Core i5
Processor Speed:	2.9 GHz
Number of Processors:	1
Total Number of Cores:	2

And the Python version and installed packages:

$ python --version
Python 3.5.1
$ pip freeze
gevent==1.1.1
greenlet==0.4.9
numpy==1.11.0
pyuv==1.2.0
uvent==0.3.0
uvloop==0.4.11
wheel==0.24.0

[1] http://magic.io/blog/uvloop-make-python-networking-great-again/

@rfyiamcool

So gevent is the best?

@smetj

smetj commented Mar 7, 2018

@pfreixes perhaps try with PyPy + Gevent ...

@tty02-fl

tty02-fl commented Apr 9, 2018

I ran a more recent test with fresh versions, but with much heavier requests on the server side:

$ python --version
Python 3.6.3

$ pip freeze
bottle==0.12.13
Flask==0.12
gevent==1.3a2
greenlet==0.4.13
sanic==0.7.0

This is the result:

  1. sanic asyncio-uvloop ~252 req/second (20% faster)
  2. gevent-wsgi-bottle ~200 req/second
  3. gevent-wsgi-flask ~194 req/second
