
View heroku logstash config
input {
  # translate syslog messages into logstash events
  # with priority field, fields added by SYSLOGLINE pattern
  # (e.g. timestamp, logsource, program, pid, etc.) and the 
  # rest of the syslog string in the message field
  syslog {
    # port => 514
    # codec => plain
    # syslog_field => "message"
  }
}
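For local testing, the fragment above can be rounded out into a complete pipeline. This is a sketch, not the gist's actual file: the port value and the stdout output are assumptions.

```
input {
  syslog {
    # 514 requires root privileges; a high port is assumed for local testing
    port => 5514
    codec => plain
    syslog_field => "message"
  }
}

output {
  # rubydebug prints each parsed event, including the fields added by the
  # SYSLOGLINE pattern (timestamp, logsource, program, pid, ...)
  stdout { codec => rubydebug }
}
```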
View generate aes key.rb
require 'openssl'
require 'base64'
cipher ="aes-256-cbc")
key = cipher.random_key
value = Base64.strict_encode64(key)
allocation = {
  events: 3_131_647.7,
  fanout: 1_234_271.5,
  default: 17_179_872,
  detonation: 705_552.8,
}
View encryption.rb
require 'openssl'
require 'base64'
module Cipher
  extend self

  def encrypt(plaintext)
View cassandra counter

RF=3, Ring=3, Up=3


  Execute CQL3 query | 2017-12-15 09:30:45.114000 | |              0
            Parsing update count_by_id SET count = count + 1 WHERE id = 1; [SharedPool-Worker-1] | 2017-12-15 09:30:45.114000 | |            320
                                                       Preparing statement [SharedPool-Worker-1] | 2017-12-15 09:30:45.120000 | |           5772
                                 Executing single-partition query on local [SharedPool-Worker-1] | 2017-12-15 09:30:45.172000 | |          57252
                                              Acquiring sstable references [SharedPool-Worker-1] | 2017-12-15 09:30:45.172000 | |          57735
View yield percentage in MRI

MRI GIL isn't pure evil, a.k.a. Native Threads Aren't Magic

The GIL in MRI does not inherently mean inefficient concurrency. Likewise, using "native threads" does not inherently mean efficient concurrency.

To be clear, modern Ruby (>= 1.9) does use native threads. The key is that the GIL ensures that only one thread executes Ruby code at a time (regardless of the number of CPU cores available), per process. So MRI threads are native threads, but to draw the distinction, we'll use the terms "GIL thread" and "native thread" below.


Concurrency is not about magically getting two threads to execute at the same time on a single core (because that would be magic). It is about efficiently managing thread scheduling so the maximum amount of CPU cycles is spent doing work.

View cassandra latency jmx logstash filter.rb
### Stub for a logstash filter event
class Event
  def initialize(initial_data=nil)
    @data = initial_data || {}
  end

  def get(key)
    @data[key]
  end
end
View cassandra reconnection and
rails c
while true do
  r = nil
  # Timing call truncated in the original listing; Benchmark.realtime is an assumption.
  duration = Benchmark.realtime do
    r = Cassie.session.execute("select count(*) from system.hints")
  end
end
View sidekiq_local_processes_stats.rb
# Emit key statistics about locally running Sidekiq processes to a stream.
require 'sidekiq/api'

class SidekiqProcessesPrinter
  attr_reader :format

  def initialize(opts={})
    @format = opts.fetch(:format){ :plain }
View sidekiq_shared_queues_info.rb
# Emit key statistics about Sidekiq queues to a stream.
require 'sidekiq/api'

class SidekiqQueuesPrinter
  attr_reader :format

  def initialize(opts={})
    @format = opts.fetch(:format){ :plain }