- Takes values from a text file and converts them into HTTP requests (against an echo service with a request execution time of 150ms - 200ms).
- Splits the text file into chunks of `$CHUNKS` items, which get executed in sequence by a single `curl` invocation that reuses the connection (reducing TCP and TLS overhead).
- `$PARALLEL` chunks are executed concurrently by `sem` (from GNU parallel) - see the sketch below.
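The connection reuse comes from handing one `curl` process several URLs: it keeps the connection to the host open and sends the follow-up requests over it. A minimal illustration (the echo endpoint and output file names are placeholders, not the demo's actual values):

```bash
# Two requests in one curl invocation: the second one rides on the TCP/TLS
# connection already established for the first (placeholder endpoint).
curl -s \
  -o out1.json --url "https://example.com/echo?value=1" \
  -o out2.json --url "https://example.com/echo?value=2"
```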
The demo generates 100 requests. Each request's response is stored to disk (because why not).
`run-sequentially.sh` executes the 100 requests in sequential `curl` invocations. Each invocation fires up a new process, establishes a new TCP connection and does the TLS handshake anew.
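A minimal sketch of such a sequential loop, assuming a hypothetical `values.txt` input file, a `responses/` output directory and a placeholder endpoint (the repo's actual names may differ):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: one curl process per request, so every iteration pays
# for process startup, TCP connect and the TLS handshake again.
set -euo pipefail

ENDPOINT="https://example.com/echo"   # placeholder for the echo service
mkdir -p responses

while read -r value; do
  curl -s -o "responses/${value}.json" "${ENDPOINT}?value=${value}"
done < values.txt
```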
`run-parallel.sh` executes 5 request batches concurrently. Each batch consists of 10 sequential requests (re)using a single TCP/TLS connection.
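A sketch of the chunked, parallel variant under the same assumptions (`values.txt`, `responses/`, placeholder endpoint); `sem` from GNU parallel acts as a counting semaphore that caps how many `curl` processes run at once:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: $CHUNKS requests per curl process (one reused
# connection), at most $PARALLEL curl processes at a time via sem.
set -euo pipefail

CHUNKS=10
PARALLEL=5
ENDPOINT="https://example.com/echo"   # placeholder for the echo service

mkdir -p responses
split -l "$CHUNKS" values.txt chunk_   # one file per batch of values

for chunk in chunk_*; do
  # Build one -o/--url pair per value; all URLs handed to a single curl
  # invocation share one TCP/TLS connection.
  args=()
  while read -r value; do
    args+=(-o "responses/${value}.json" --url "${ENDPOINT}?value=${value}")
  done < "$chunk"

  # sem blocks once $PARALLEL jobs are running and frees a slot whenever
  # one of them finishes.
  sem --jobs "$PARALLEL" -- curl -s "${args[@]}"
done

sem --wait      # wait for the remaining background jobs
rm -f chunk_*   # clean up the temporary chunk files
```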
➜ time ./run-parallel.sh
…
./run-parallel.sh 1.05s user 0.80s system 30% cpu 6.152 total
➜ time ./run-sequentially.sh
…
./run-sequentially.sh 3.10s user 0.70s system 1% cpu 4:07.13 total
As those numbers show (6 seconds vs. 4 minutes), the parallel execution with reused connections plays in a different league - it's roughly 40 times faster for 100 requests. For 200 requests the parallel execution finished after 10 seconds - guess how long the sequentially executed requests took…
- The chunk size is restricted by the executing machine's resources - that's the one sending the requests.
- The number of parallel executions is restricted by the host machine's resources - that's the one you're bombarding with requests.