@bruno-
Created November 5, 2021 21:11
Threads vs Async Ruby
require "async"

CONCURRENCY = 1000
ITERATIONS = 100

def work
  Async do |task|
    CONCURRENCY.times do
      task.async do
        sleep 1
      end
    end
  end
end

def duration
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  work
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

def average
  ITERATIONS.times.sum {
    duration
  }.fdiv(ITERATIONS)
end

puts average # => 1.01772911996115

CONCURRENCY = 1000
ITERATIONS = 100

def work
  CONCURRENCY.times.map {
    Thread.new do
      sleep 1
    end
  }.each(&:join)
end

def duration
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  work
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

def average
  ITERATIONS.times.sum {
    duration
  }.fdiv(ITERATIONS)
end

puts average # => 1.045861059986055
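Taking the two printed averages at face value (each run is 1000 concurrent one-second sleeps), the per-task gap works out to roughly 28 microseconds per thread versus per fiber — a back-of-the-envelope figure only, since microbenchmark noise can easily be on this order:

```ruby
# Back-of-the-envelope using the two averages printed above.
async_avg  = 1.01772911996115   # seconds per run, 1000 fibers
thread_avg = 1.045861059986055  # seconds per run, 1000 threads

per_task = (thread_avg - async_avg) / 1000.0
puts format("%.1f microseconds extra per thread", per_task * 1e6)
# => 28.1 microseconds extra per thread
```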
@ioquatix

You should test a real-world case, like one event loop creating lots of fibers, rather than creating lots of event loops.
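For illustration, here is a stdlib-only sketch of the "one loop, many fibers" shape being suggested: a single round-robin loop resumes every fiber until its cooperative "sleep" deadline has passed. (The async gem's reactor does this properly, parking fibers on an IO selector instead of busy-polling — this only shows the idea; the method name and parameters are made up for the example.)

```ruby
# One scheduler loop drives many fibers, instead of one loop (or one
# OS thread) per task. Each fiber yields control back to the loop
# until its own deadline has passed -- a cooperative "sleep".
def one_loop_many_fibers(count = 1000, pause = 0.1)
  fibers = count.times.map do
    Fiber.new do
      deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + pause
      # Hand control back to the loop until the deadline passes.
      Fiber.yield while Process.clock_gettime(Process::CLOCK_MONOTONIC) < deadline
    end
  end

  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  # Round-robin: resume every still-alive fiber until all have finished.
  fibers.each { |f| f.resume if f.alive? } while fibers.any?(&:alive?)
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

puts one_loop_many_fibers # ~0.1s total: 1000 "sleeps" overlap in one loop
```

All 1000 "sleeps" overlap, so the whole batch finishes in roughly one pause interval rather than `count * pause` seconds.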

@schneems

@bruno- thanks for catching my math mistakes 🙊 .

I'm not sure if this is the right way to look at it. I can't prove your math wrong, though.

In general, I'm nervous when there's a variable that I can arbitrarily change to get different (comparative) results. It makes me worried that I've gamed my own microbenchmark.

Not sure if you've seen it, but I've done a bunch of work in the space of trying to ensure benchmark results are actually valid.

My conclusion is that the average request duration for threads is 0.2225s or about 11% overhead on average.

That's the average across 200 runs. Each run does 1000 operations which is what I was dividing by (if that makes sense).

I've seen the Noah Gibbs article ages ago, but forgotten about it (thanks for the reminder). The code is here: https://github.com/noahgibbs/fiber_basic_benchmarks/tree/master/benchmarks — it looks like that file got renamed.
