@igrigorik
Created July 6, 2012 08:01
Example of early head flush on load time
<!DOCTYPE html>
<html>
<head>
<meta charset=utf-8 />
<title>Hello</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
Hello World
</body>
</html>

require 'goliath'

class Delay < Goliath::API
  def response(env)
    case env['REQUEST_PATH']
    when /css$/
      # simulate a slow (1s delay) stylesheet
      EM::Synchrony.sleep(1)
      return [200, {'Content-Type' => 'text/css'}, 'body { color: red }']
    else
      data = File.open('file.html')

      # flush the HTML head (140 bytes) immediately after the headers;
      # this will allow the preloader to request the stylesheet before
      # the rest of the page completes
      EM.add_timer(0.01) do
        env.chunked_stream_send(data.read(140))
      end

      # flush the remainder of the document after 1.5s
      EM.add_timer(1.5) do
        env.chunked_stream_send(data.read.to_s)
        env.chunked_stream_close
      end

      # return the 200 response headers immediately
      chunked_streaming_response(200, {'Content-Type' => 'text/html'})
    end
  end
end
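
For context, a minimal sketch of a client that makes the effect visible; it assumes the app above is saved next to file.html and is listening on localhost port 9000, and the output format is illustrative rather than part of the original gist:

require 'net/http'

# Hypothetical client: stream the response body and print when each chunk
# arrives, so the gap between the flushed head and the rest of the document
# is visible. Assumes the Goliath app is listening on localhost:9000.
start = Time.now
Net::HTTP.start('localhost', 9000) do |http|
  http.request(Net::HTTP::Get.new('/')) do |response|
    response.read_body do |chunk|
      puts format('+%.2fs  %d bytes', Time.now - start, chunk.bytesize)
    end
  end
end

Run against the server above, the first line should appear almost immediately with the 140-byte head, and the second roughly 1.5s later with the rest of the page.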
@fxn commented Jul 6, 2012

Yeah, browsers process nodes on the fly and fire requests for assets as their respective nodes are parsed. I did some research on this back when streaming was added to Rails and wrote a post about it: http://weblog.rubyonrails.org/2011/4/18/why-http-streaming. I also have a talk with much more detail, in Spanish (http://vimeo.com/37688380).

BTW, that behavior does not depend on whether the response is streamed; nodes are processed on the fly no matter how the bytes arrive at the browser, streamed or not. The point, as you observe, is that if some bytes arrive earlier, the parallel requests fire sooner, so overall responsiveness is better.

Of course, with standard cache practices for assets, the benefit of this is mostly limited to a first request.

@igrigorik (Author)

@fxn: Yup. Quick question: did automatic flushing (http://yehudakatz.com/2010/09/07/automatic-flushing-the-rails-3-1-plan/) ever make it into Rails 3.1?

@fxn commented Jul 8, 2012

Yes, it is automatic. The layout gets chunked, and the template's main content goes out in a single chunk. Basically, each time you switch from hard-coded content to dynamic content in ERB you get a chunk. In a template you can still use a kind of content_for helper called provide (the layout communicates with the template via fibers in this case).
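
For anyone following along, a minimal sketch of what that looks like in application code; the controller and model names are made up, but render stream: true and the provide helper are the Rails pieces being described:

class PostsController < ApplicationController
  def index
    # The relation is lazy, so the query runs while the template renders,
    # after the first chunks of the layout have already been flushed.
    @posts = Post.all
    render stream: true
  end
end

# In the streamed template, use provide rather than content_for, since the
# layout is rendered (and flushed) before the template body:
#
#   <% provide :title, 'Posts' %>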

Server-side support is also interesting. You need an HTTP/1.1 proxy; Rails disables streaming if the client sends a 1.0 request, because chunked responses are a 1.1 feature. Also, you obviously do not want your reverse proxy to buffer the response, which nginx does by default; buffering defeats the point, since the client no longer gets the early bytes quickly.
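
(A hedged aside: nginx's per-response buffering can also be switched off with the X-Accel-Buffering header, for example from a controller action:)

# Tell nginx not to buffer this particular response; nginx honors this
# special header, so the chunks reach the client as they are flushed.
response.headers['X-Accel-Buffering'] = 'no'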

Then, the reverse proxy should be able to compress, since you want compression in a production environment. In order to compress, the reverse proxy must dechunk, compress, and chunk again, on the fly.

It is a very interesting topic with many practical gotchas in production environments. When I researched this, of the setups I tested the only combo that worked was Apache + Unicorn, a combo that is in fact not recommended by Unicorn. By "worked" I mean it didn't buffer and was able to compress on the fly.

@wolfwifee commented May 4, 2016

Hi there, for a Jekyll site, is there an equivalent way to flush after the header tag as well, something like "Flush the Buffer Early"? Thank you.
