@kevinjqiu
Created April 10, 2015 05:26

Request handling models

Abstraction layer

  • What is wsgi

PEP 3333, the Web Server Gateway Interface (WSGI), specifies how a web server communicates with a Python web application, and how applications and middleware can be chained together to process one request

  • Benefit?

Any WSGI-compliant framework can be paired with any WSGI-compliant server, e.g. Gunicorn + Django or Tornado + Pyramid
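The whole interface fits in a few lines. A minimal sketch of a WSGI application (the app name and response body are illustrative):

```python
# A WSGI application is just a callable that takes the request environ
# dict and a start_response callback, and returns an iterable of bytes.
def app(environ, start_response):
    body = b"Hello, WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Because the contract is this small, any conforming server can host it, e.g. the stdlib's `wsgiref.simple_server.make_server("", 8000, app).serve_forever()`.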

IO models

Slow client problem

A slow client ties up a worker while it trickles in its request or reads its response byte by byte; with blocking IO, a handful of such clients can occupy every worker. The slowloris attack exploits this deliberately by holding many connections open with partial requests.

Multi-process/blocking IO

A read or write blocks the whole process, so each process can handle only one client at a time

Concurrency is achieved by spawning more processes

Pros:

  • No multi-threading issues

Cons:

  • Heavy-weight: each concurrent client costs a whole process, and forking copies the parent process's memory space (copy-on-write mitigates this, but per-process overhead remains high)
  • Suffers from slow client problem and must be behind a buffering reverse proxy
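The fork-per-connection model can be sketched in a few lines. This is a hypothetical echo server, not any particular server's code; the function name is made up for illustration:

```python
import os
import socket

# Fork-per-connection sketch: the parent accepts, forks, and each child
# blocks on read/write for exactly one client while the parent loops
# back to accept() the next connection.
def serve_forking(listen_sock):
    while True:
        conn, _ = listen_sock.accept()   # parent blocks here
        pid = os.fork()
        if pid == 0:                     # child: owns this one client
            listen_sock.close()
            data = conn.recv(1024)       # blocking read ties up the child
            conn.sendall(data)           # echo it back
            conn.close()
            os._exit(0)
        conn.close()                     # parent: hand off, keep accepting
```

A slow client here pins an entire process for the duration of its request, which is why this model wants a buffering proxy (e.g. nginx) in front.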

Multi-threaded/blocking IO

Each process has multiple threads and each thread handles 1 client at a time. Concurrency is achieved by spawning more threads

Pros:

  • Still pretty easy to work with (compared to evented model)

Cons:

  • Application code must be thread-safe (e.g. use thread-local storage for per-request data and SQLAlchemy's scoped_session)
  • Pre-emptive context switching means the OS can switch threads in the middle of CPU-bound code, and because of Python's GIL this makes CPU-bound code slower than in a single-threaded environment
  • Also suffers from the slow client problem; needs to sit behind a buffering reverse proxy
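The thread-per-connection variant swaps `fork()` for a thread. Again a hypothetical echo server for illustration, with made-up names:

```python
import socket
import threading

# Thread-per-connection sketch: each client gets a thread that blocks
# on IO; the OS schedules threads pre-emptively, so shared state must
# be protected (locks, thread-locals).
def handle(conn):
    data = conn.recv(1024)   # blocks only this thread
    conn.sendall(data)
    conn.close()

def serve_threaded(listen_sock):
    while True:
        conn, _ = listen_sock.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Threads are cheaper than processes, but each slow client still pins one thread, and the GIL caps CPU-bound parallelism.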

Evented IO (Non-blocking)

IO calls do not block the process; instead the application registers callbacks that run when IO-ready events fire.

Pros:

  • Can handle a very large number of concurrent connections with very little resource usage
  • Largely immune to the slow client problem, so no buffering proxy is needed: an idle connection costs almost nothing

Cons:

  • Callback-hell
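A minimal sketch of the evented model using the stdlib `selectors` module (an illustrative echo server; the function names are made up, and real event loops add buffering, timeouts, and error handling):

```python
import selectors
import socket

# Event-loop sketch: instead of blocking, register a callback per
# socket and let select/epoll tell us which sockets are ready.
sel = selectors.DefaultSelector()

def accept(listen_sock):
    conn, _ = listen_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)  # callback for this conn

def echo(conn):
    data = conn.recv(1024)          # socket is ready: returns immediately
    if data:
        conn.sendall(data)
    sel.unregister(conn)
    conn.close()

def serve_evented(listen_sock):
    listen_sock.setblocking(False)
    sel.register(listen_sock, selectors.EVENT_READ, accept)
    while True:                     # the event loop
        for key, _ in sel.select():
            key.data(key.fileobj)   # dispatch to the registered callback
```

One process multiplexes every connection, so a slow client merely leaves its socket idle in the selector. The cost is that all logic is inverted into callbacks.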

Gevent (co-operative multithreading)

Similar to multi-threading/blocking IO, but instead of blocking on IO, a gevent worker "yields" so the process can serve other requests while it waits. The operating system cannot pre-emptively context-switch gevent workers.

Pros:

  • Can handle lots of concurrent requests with low resource footprint
  • Familiar programming model (not callback hell)
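Gevent implements this with greenlets and monkey-patched IO, but the core idea of co-operative yielding can be shown with a toy analogy using plain generators (this is not gevent's API, just an illustration):

```python
# Toy cooperative scheduler: each "worker" yields wherever it would
# block on IO, and the loop resumes workers round-robin. Nothing is
# pre-empted mid-step, which is what gevent's greenlets give you.
def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield          # "wait for IO": hand control back voluntarily

def run(workers):
    while workers:
        # resume each worker once; drop the ones that have finished
        workers = [w for w in workers
                   if next(w, StopIteration) is not StopIteration]

log = []
run([worker("a", 2, log), worker("b", 2, log)])
# log interleaves the two workers: ['a:0', 'b:0', 'a:1', 'b:1']
```

Because switches happen only at yield points, code between yields needs no locks; the flip side is that a worker that never yields (e.g. a long CPU-bound loop) starves everyone else.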