# ANSI color escape sequences for a Bash prompt, wrapped in \[ \] so readline can size the prompt correctly
RED="\[\033[0;31m\]"
YELLOW="\[\033[0;33m\]"
GREEN="\[\033[0;32m\]"
LIGHT_CYAN="\[\033[1;36m\]"
BLUE="\[\033[0;34m\]"
LIGHT_RED="\[\033[1;31m\]"
LIGHT_GREEN="\[\033[1;32m\]"
WHITE="\[\033[1;37m\]"
LIGHT_GRAY="\[\033[0;37m\]"
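
A minimal sketch of wiring these into a prompt (the RESET variable and the layout below are illustrative additions, not part of the snippet above):

# RESET and the prompt layout are illustrative
RESET="\[\033[0m\]"
PS1="${GREEN}\u@\h${WHITE}:${BLUE}\w${RESET}$ "
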
=CELL/(1000000000.00*60*60*24)+"1/1/1970"
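This converts a Unix epoch timestamp in nanoseconds (CELL) into a spreadsheet date: divide by the number of nanoseconds in a day, then add the 1/1/1970 epoch.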
gsutil -m cp -r s3://my-s3-bucket gs://my-gs-bucket
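For the S3 side of the copy, gsutil picks up AWS credentials from the [Credentials] section of the ~/.boto config file (aws_access_key_id / aws_secret_access_key).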
crowdmatt / s3-redshift-load.rb
Created April 26, 2013 00:25
Example of loading data into Amazon Redshift using the redshift database adapter.
ActiveRecord::Base.connection.execute(
"copy campaign_events from 's3://BUCKET/FILEPATHPREFIX/' credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX' emptyasnull blanksasnull fillrecord delimiter '|'"
)
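Note that COPY treats 's3://BUCKET/FILEPATHPREFIX/' as a prefix, so every object under it is loaded in a single parallel operation.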
---- Begin output of echo 'sinatra restart' && sleep 0 && ../shared/scripts/unicorn clean-restart ----
STDOUT: STDERR: /opt/aws/opsworks/releases/20130418090734_113/vendor/bundle/ruby/1.8/gems/chef-0.9.15.5/bin/../lib/chef/node/attribute.rb:374:in `auto_vivifiy': You tried to set a nested key, where the parent is not a hash-like object: deploy/[MYAPP]/environment/RAILS_ENV/RAILS_ENV (ArgumentError)
from /opt/aws/opsworks/releases/20130418090734_113/vendor/bundle/ruby/1.8/gems/chef-0.9.15.5/bin/../lib/chef/node/attribute.rb:280:in `get_value'
from /opt/aws/opsworks/releases/20130418090734_113/vendor/bundle/ruby/1.8/gems/chef-0.9.15.5/bin/../lib/chef/node/attribute.rb:269:in `upto'
from /opt/aws/opsworks/releases/20130418090734_113/vendor/bundle/ruby/1.8/gems/chef-0.9.15.5/bin/../lib/chef/node/attribute.rb:269:in `get_value'
from /opt/aws/opsworks/releases/20130418090734_113/vendor/bundle/ruby/1.8/gems/chef-0.9.15.5/bin/../lib/chef/node/attribute.rb:134:in `each'
from /opt/aws/opsworks/releases/20130418090734_
=SUMPRODUCT((E2:E4106<>"")/COUNTIF(E2:E4106,E2:E4106&""))
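This counts the distinct non-blank values in E2:E4106; the &"" keeps COUNTIF from returning zero (and dividing by zero) on blank cells.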
Vagrant::Config.run do |config|
  # Debian Squeeze 64-bit base box with Ruby 1.9.3
  config.vm.box = "squeeze64-ruby193"
  config.vm.box_url = "http://packages.diluvia.net/squeeze64-ruby193.box"
  # Host-only network with a fixed private IP
  config.vm.network :hostonly, "33.33.33.10"
  # Share the current directory into the VM as /cookbooks
  config.vm.share_folder "v-cookbooks", "/cookbooks", "."
end
# Extract the public key from an RSA private key in PEM format
openssl rsa -in MYFILE.pem -pubout > MYFILE.pub
# Convert that PKCS8 public key into OpenSSH public-key format
ssh-keygen -f MYFILE.pub -i -m PKCS8

How to setup a Go / Golang WebServer with Kafka Log Aggregation to S3

This is a work in progress.

Install Kafka

Download the latest Kafka, which you can find at: http://kafka.apache.org/downloads.html
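
A minimal sketch of unpacking the tarball and starting a single broker ($KAFKA_TGZ_URL is a placeholder for whichever release you pick from the page above; the script and config names match the stock tarball layout):

# $KAFKA_TGZ_URL is a placeholder; use the tarball URL from the downloads page
wget -O kafka.tgz "$KAFKA_TGZ_URL"
tar -xzf kafka.tgz && cd kafka_*
# Kafka needs ZooKeeper; the bundled single-node config is fine for testing
bin/zookeeper-server-start.sh config/zookeeper.properties &
# Start a single broker with the default config
bin/kafka-server-start.sh config/server.properties
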

Setting up Flume NG, listening to syslog over UDP, with an S3 Sink

My goal was to set up Flume on my web instances and write all events into S3, so I could easily use other tools like Amazon Elastic MapReduce and Amazon Redshift.

I didn't want to have to deal with log rotation myself, so I set up Flume to read from a syslog UDP source. In this case, Flume NG acts as a syslog server, so as long as Flume is running, my web application can simply write to it in syslog format on the specified port. Most languages have plugins for this.
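
A sketch of the kind of agent config this describes: a syslogudp source, a memory channel, and an HDFS sink pointed at an s3n:// URL. The names, port, and key/bucket placeholders are illustrative; Flume NG has no dedicated S3 sink, so the HDFS sink (with the Hadoop S3 jars on Flume's classpath) is the usual route.

# Write a minimal flume.conf (all names and values are illustrative)
cat > flume.conf <<'EOF'
agent.sources  = syslog-in
agent.channels = mem
agent.sinks    = s3-out

# Listen for syslog messages over UDP
agent.sources.syslog-in.type = syslogudp
agent.sources.syslog-in.host = 0.0.0.0
agent.sources.syslog-in.port = 5140
agent.sources.syslog-in.channels = mem

agent.channels.mem.type = memory
agent.channels.mem.capacity = 10000

# HDFS sink writing to S3 via an s3n:// path
agent.sinks.s3-out.type = hdfs
agent.sinks.s3-out.hdfs.path = s3n://ACCESS_KEY:SECRET_KEY@my-bucket/flume/events
agent.sinks.s3-out.hdfs.fileType = DataStream
agent.sinks.s3-out.channel = mem
EOF
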

At the time of this writing, I've been able to get Flume NG up and running on 3 EC2 instances, all writing to the same bucket.

Install Flume NG on instances
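
A sketch of a manual install from the Apache binary tarball (the 1.4.0 release number is an assumption; substitute whatever is current, and the agent and config names match the sketch above):

# Download and unpack the Flume NG binary distribution (release number is an assumption)
wget http://archive.apache.org/dist/flume/1.4.0/apache-flume-1.4.0-bin.tar.gz
tar -xzf apache-flume-1.4.0-bin.tar.gz
cd apache-flume-1.4.0-bin
# Run the agent in the foreground with the config written above
bin/flume-ng agent -n agent -c conf -f ../flume.conf -Dflume.root.logger=INFO,console
# From another shell, send a test syslog message over UDP (util-linux logger)
logger -d -n 127.0.0.1 -P 5140 "hello from flume"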