Richard Schneeman (schneems)
View gist:8830184
# Redirects STDOUT writes to `Thread.current[:stdout]` if present
$stdout.instance_eval do
  alias :original_write :write
end
$stdout.define_singleton_method(:write) do |value|
  if Thread.current[:stdout]
    Thread.current[:stdout].write value
  else
    original_write value
  end
end
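A hypothetical usage sketch, assuming the patch above has been loaded: a thread can point its own output at a StringIO and capture it without affecting other threads.
require 'stringio'

captured = StringIO.new
Thread.new do
  Thread.current[:stdout] = captured            # this thread's writes go to the StringIO
  $stdout.write "only visible in captured\n"    # routed through the patched write
end.join
captured.string # => "only visible in captured\n"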
View gist:8849116

I said

I wanted to talk about this so badly on the show, but it wasn't released yet. Yesterday we launched performance dynos (https://blog.heroku.com/archives/2014/2/3/heroku-xl). Each one gives you a dedicated 6 GB of RAM and 8 cores per dyno. The idea is that if you really want to drop your tail latencies, there's no getting around the need for high concurrency. On a dyno like this you could easily run 12x the number of Unicorn or Puma workers, or even more if you're using a copy-on-write-friendly Ruby like 2.1.0. You can still scale out horizontally with more "performance" dynos, but this is one way you can also scale vertically. Ask me your performance dyno related questions, and I'll do my best to answer them here!
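A rough sketch of the kind of tuning this enables, assuming Puma and a WEB_CONCURRENCY environment variable (the numbers are illustrative, not a Heroku recommendation):

# config/puma.rb
# On a bigger dyno you can raise WEB_CONCURRENCY instead of adding more dynos.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count
preload_app!   # lets a copy-on-write-friendly Ruby share memory between workers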

He said

Could you explain the "tail latencies" bit? I saw a bunch of people post stats from their dashboards showing the switch to PX dynos and the difference it made was really impressive, but I guess I don't understand why it made a difference. You mention you cou

View gist:8868565
class DirThreadsafe < Dir
  def self.chdir(target)
    Thread.current[:pwd] = target
  end
end

module Kernel
  alias :_backtick_original :"`"
View Gemfile
source "https://rubygems.org"
gem 'puma'
gem 'rack'
View gist:9398110
> new_jekyll_repo = Repo.find_by_full_name "jekyll/jekyll"
> old_repo = Repo.find_by_full_name "mojombo/jekyll"
> old_repo.repo_subscriptions.each { |sub| sub.repo_id = new_jekyll_repo.id; sub.save }
View gist:9474721
rails new sprockets_template_require
cd sprockets_template_require/
mkdir app/assets/templates/
echo "FOO" > app/assets/templates/index.html
echo "//= depend_on index.html" > app/assets/javascripts/application.js
RAILS_ENV=production bundle exec rake assets:precompile
echo "bar" >> app/assets/templates/index.html   # change the file application.js depends on
RAILS_ENV=production bundle exec rake assets:precompile   # precompile again to check whether application.js is invalidated
View gist:9790733

Options for S3 Direct Upload

Goal

To upload a file directly to S3 without relying on storing the data on a server in between. Why? Let's say your users are uploading 50 MB files: it's wasteful and takes up server resources to have your backend sit around holding the file only to send it off to S3 anyway. Your server can also go down or restart while the upload is taking place (this can happen regardless of file size), leaving the upload incomplete. If the file goes directly to S3, we don't have to worry about our server's availability.
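As a rough illustration of the idea (not one of the options listed below), a minimal sketch assuming the aws-sdk-s3 gem, credentials in the standard AWS environment variables, and placeholder bucket/key names: the server only signs a short-lived URL, and the browser PUTs the file straight to S3.

require 'aws-sdk-s3'
require 'securerandom'

signer = Aws::S3::Presigner.new
url = signer.presigned_url(
  :put_object,
  bucket:     "my-uploads-bucket",              # hypothetical bucket
  key:        "uploads/#{SecureRandom.uuid}",   # hypothetical key scheme
  expires_in: 15 * 60                           # URL stays valid for 15 minutes
)
# Hand `url` to the browser; the file bytes never touch our server.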

Options

  • s3upload.js
View gist:10025798
require 'pathname'
require 'bigdecimal'
KB_TO_BYTE = 1024 # 2**10 = 1024
MB_TO_BYTE = 1_048_576 # 1024**2 = 1_048_576
GB_TO_BYTE = 1_073_741_824 # 1024**3 = 1_073_741_824
CONVERSION = { "kb" => KB_TO_BYTE, "mb" => MB_TO_BYTE, "gb" => GB_TO_BYTE }
ROUND_UP = BigDecimal.new("0.5")
def linux_memory(file)
View gist:96cc971fd8e7f2180dd6
# Print the call stack after every `puts`, to track down where stray output is coming from
module Kernel
  alias :old_puts :puts
  def puts(val)
    old_puts val
    old_puts caller.inspect
  end
end