
Today, in Ruby, if I want to make a network request, I block:

response = Net::HTTP.get("/post/1")

If I want to do multiple requests in parallel, I can use a thread per request:

responses = ["/post/1", "/post/2"].map do |url|
  Thread.new { Net::HTTP.get(url) }
end.map(&:value)

So a thread, with its value, can basically be used as a future value.

However, this means that you need a stack in memory for every HTTP request, even though we don't really care about running any interesting code in the thread.

Additionally, because we're running code in threads, we get somewhat wonky exception semantics. You can try to abort_on_exception, but it's awkward to get synchronous exceptions out of that asynchronous work.

For example, if you have a single Net::HTTP request that fails with an exception:

begin
  # typo, but a better example would be an exception in the Net::HTTP code itself or a problem on the socket
  # that is conceptually an error
  Net::HTTP.geet("/posts/1")
rescue => e
  # this block is called
end

If you have many Net::HTTP requests, you can't get synchronous exceptions:

begin
  urls.map { |u| Thread.new { Net::HTTP.geet(u) } }.map(&:value)
rescue => e
  # this block will not be called
end

You can set abort_on_exception on a thread to try to circumvent this, and as long as the abort_on_exception= call is scheduled before the error occurs inside the thread, things should be OK. This model is somewhat finicky, but it can be made to work if you sort out the details and structure things sanely.
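
Concretely, that looks something like the following sketch (the host and paths are placeholders, and the global flag is set before any workers are spawned to sidestep the scheduling race):

require 'net/http'

Thread.abort_on_exception = true

threads = ["/posts/1", "/posts/2"].map do |path|
  Thread.new { Net::HTTP.get(URI("http://example.com#{path}")) }
end

# If a worker dies with an unhandled exception, it is re-raised in the main
# thread right away instead of waiting for the join below. Which rescue (if
# any) sees it depends on what the main thread happens to be executing at
# that moment, which is part of why this model is finicky.
responses = threads.map(&:value)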

Async

Historically, people have avoided the "one stack per request" issue in Ruby by moving towards a completely different programming model, where everything was managed using callbacks. Executing a callback doesn't require retaining the entire stack, so you can eliminate the overhead of needing multiple in-memory stacks:

post1 = EM::HTTP.get("/posts/1")
post2 = EM::HTTP.get("/posts/2")

post1.success do |value|

end

post2.success do |value|

end

To join the two requests together, you could use another abstraction, but still wind up in a callback:

EM.join(post1, post2) do |response1, response2|
  # continue with the program
end
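
The EM::HTTP and EM.join calls above are pseudocode. With the real em-http-request gem, the same join looks roughly like the following sketch, using its MultiRequest aggregator (treat the details as illustrative rather than exact):

require 'em-http-request'

EM.run do
  multi = EventMachine::MultiRequest.new
  multi.add :post1, EventMachine::HttpRequest.new('http://example.com/posts/1').get
  multi.add :post2, EventMachine::HttpRequest.new('http://example.com/posts/2').get

  multi.callback do
    # both requests are done; pull the results out of multi.responses and
    # continue with the program
    EM.stop
  end
end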

This sucks because it has terrible ergonomics. You lose the ability to write straight-line, synchronous code and have to think about everything in terms of callbacks. Exception handling also becomes harder, because errors now surface inside callbacks scattered across an ever-growing amount of asynchronous code.

All we really want is the ability to synchronously make a bunch of HTTP requests in parallel on a single stack ("stack" here means Fiber or Thread).

Promises / Futures

We can achieve this easily if our I/O is built on primitives that share a single stack under the hood and let us choose when we want to block.

post = Thread.yield Net::HTTP.get("/posts/1")

This makes joining multiple async requests together simple with the addition of a primitive that takes any two promises and waits for both to resolve, yielding back the resolved values.

# post1 and post2 are "promises"
post1 = Net::HTTP.get("/posts/1")
post2 = Net::HTTP.get("/posts/2")

# explicit blocking call
Thread.yield Promise.join(post1, post2)

If libraries are written this way, we can easily build up groups of promises without blocking or creating threads, and then block when we're ready to wait.
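
For example, assuming the promise-returning Net::HTTP.get above and a Promise.join that accepts any number of promises (both hypothetical), you could kick off a whole batch and block exactly once:

# start all of the requests; nothing blocks and no threads are created
posts = (1..10).map { |i| Net::HTTP.get("/posts/#{i}") }

# a single blocking point, yielding the resolved values
responses = Thread.yield Promise.join(*posts)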

You can avoid the explicit call to Thread.yield with a blocking variant of Promise.join (here called Promise.wait):

# post1 and post2 are "promises"
post1 = Net::HTTP.get("/posts/1")
post2 = Net::HTTP.get("/posts/2")

# blocking call, no explicit Thread.yield needed
Promise.wait(post1, post2)

To further improve ergonomics, you could also wrap the primitive methods that return promises with versions that block:

# using ! for this is just an example... you could have the primitive forms be something like promise_get,
# promise_post, etc. and the blocking forms be get, post, etc. The important point is just that the blocking
# forms are simple wrappers that yield to the scheduler, and people can use the promise forms if they want
# to compose the requests
post1 = Net::HTTP.get!("/posts/1")
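
Under the hood, each blocking form could be a one-line wrapper over the promise form, something like this sketch (promise_get and the reopened class are hypothetical):

module Net
  class HTTP
    # promise_get returns a promise immediately; the bang form just blocks on it
    def self.get!(path)
      Thread.yield promise_get(path)
    end
  end
end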

Streams

Sometimes, you want to grab a bunch of input streams and pipe them into some output. Once you've done that, you don't really need stacks at all. You'd like to be able to do that work, let the scheduler manage it, and move on with life.

# non-blocking, does not use a thread
socket.pipe Net::HTTP.stream_get("/posts/1")

# move on with life
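
Today, the closest approximation is to dedicate a thread (and its stack) to each pipe, which is exactly the per-request cost the scheduler-managed version avoids. A rough sketch with the standard library, assuming body_io is an IO-like response body and socket is the destination:

# one stack burned per copy, just to shuttle bytes from one IO to another
Thread.new { IO.copy_stream(body_io, socket) }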

Transition

You might be thinking to yourself, "that's all well and good, but we're not going to get everyone to rewrite their I/O libraries using async primitives".

The goal is not to get people to transition right away, but to provide a plausible path to a better world in the future.

In the short-term, we could just wrap existing synchronous operations in a promise-like interface that used threads. That wouldn't eliminate the "one-stack-per-request" problem, but it would get people using a primitive that could support that model in the future.

That primitive is actually nicer than the raw thread interface, because it's a conventional way of describing work that should run without blocking, with any exceptions captured and re-raised when you join.
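
A minimal sketch of that thread-backed wrapper (the names are illustrative, not an existing library):

require 'net/http'

class Promise
  def initialize(&work)
    @thread = Thread.new(&work)   # for now: one thread, and one stack, per promise
  end

  # Blocks until the work finishes. Thread#value re-raises whatever exception
  # terminated the thread, so failures surface synchronously at the join.
  def value
    @thread.value
  end

  def self.wait(*promises)
    promises.map(&:value)
  end
end

post1 = Promise.new { Net::HTTP.get(URI("http://example.com/posts/1")) }
post2 = Promise.new { Net::HTTP.get(URI("http://example.com/posts/2")) }

body1, body2 = Promise.wait(post1, post2)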

Once people start using the promise forms, changing the underlying implementation to avoid threads will provide better performance and lower memory usage, without any change to the public API of those objects.

This provides a smooth upgrade path for the community that doesn't require everyone to rewrite everything "right now".

Let's do it.
