mongodata3:SECONDARY> c.find({ $query: {ppoi_0: { $nearSphere: [ -74.5287, 40.1301 ], $maxDistance: 0.02601798524805497 } , deviceType: "ios", channels: { $in: [ "featured_coupons", "special_sales" ] }, appVersion: { $in: [ "3.0", "3.0.1", "3.1", "3.5" ] } }}).explain()
{
"cursor" : "BasicCursor",
"isMultiKey" : false,
"n" : 0,
"nscannedObjects" : 6484024,
"nscanned" : 6484024,
"nscannedObjectsAllPlans" : 6484024,
"nscannedAllPlans" : 6484024,
"scanAndOrder" : false,
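The "BasicCursor" and nscanned of 6,484,024 above mean the $nearSphere query ran without any index and walked the entire collection. A sketch of the fix, assuming mongo 2.x legacy [lon, lat] coordinates (the "users" collection name is hypothetical; "c" above is just a cursor variable):

```js
// hypothetical collection name; $nearSphere over legacy coordinate pairs needs a "2d" index
db.users.ensureIndex({ ppoi_0: "2d", deviceType: 1 })
// explain() should then report a GeoSearchCursor instead of BasicCursor,
// with nscanned near the result count rather than the collection size
```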
Failure is Not Optional -- it will happen
http://highscalability.com/blog/2010/10/15/troubles-with-sharding-what-can-we-learn-from-the-foursquare.html
https://web.archive.org/web/20110209190434/http://blog.foursquare.com/2010/10/05/so-that-was-a-bummer/
https://www.joyent.com/blog/postmortem-for-outage-of-us-east-1-may-27-2014
https://aws.amazon.com/message/5467D2/
http://perfcap.blogspot.com/2012/11/cloud-outage-reports.html
Introducing a new series: Post-Mortem Book Reports
Dear fellow systems engineers,
Take a moment and think about the past few years in systems outages and public post mortems.
What were your favorite outages? What are the post-mortems you read that stick with you, months or even years later? What did you learn from them?
If you are in AWS us-east-1, you probably think back to the Christmas Eve outage of 2012 or the long string of EBS outages. If you were an early user of mongo sharding, I'm betting the 4sq mongo outage is etched into your brain. If you run physical data centers with your own networking, or field lots of DDoS attempts, GitHub post-mortems are probably high on your list.
charity / kafka-snippets
Created May 17, 2016 23:32
kafka snippets updated to work w ubuntu 12.04 and kafka 0.9
#!/bin/bash -xe
# requires jq 1.5 (or at least > 1.3) and kafkacat
PATH=$PATH:/usr/lib/kafka/bin
topic="hound-staging.retriever-mutation"
which kafkacat || { echo 'no kafkacat found, bye!'; exit 1; }
which jq || { echo 'no jq found, bye!'; exit 1; }
# make sure jq is v 1.5
jq --version 2>&1 | grep -Eq '1\.[5-9]' || { echo 'jq >= 1.5 required, bye!'; exit 1; }
PRODUCTION root@kafka-6c3f65f0:~# kafkacat -L -b localhost:9092
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
4 brokers:
broker 1003 at ip-10-0-246-113.ec2.internal:9092
broker 1004 at ip-10-0-190-45.ec2.internal:9092
broker 1009 at ip-10-0-213-10.ec2.internal:9092
broker 1001 at ip-10-0-148-52.ec2.internal:9092
4 topics:
topic "hound-dogfood.retriever-mutation" with 1 partitions:
partition 0, leader 1001, replicas: 1001, isrs: 1001
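For scripting against this metadata (which is where the jq dependency comes in), kafkacat can emit it as JSON with -J. A sketch, with a canned one-topic document standing in for a live broker; the JSON shape is assumed from kafkacat's metadata output:

```shell
# same filter you would pipe `kafkacat -L -J -b localhost:9092` through;
# canned input here so the example runs without a broker
echo '{"topics":[{"topic":"hound-dogfood.retriever-mutation","partitions":[{"partition":0,"leader":1001}]}]}' |
  jq -r '.topics[] | "\(.topic): \(.partitions | length) partition(s)"'
# prints: hound-dogfood.retriever-mutation: 1 partition(s)
```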
charity / gist:d216810052c8cac23605
Created February 17, 2016 21:32
How to get all aws account limits.
$ for svc in $(aws list 3>&1 1>&2 2>&3 3>&- | sed -e '1,7d' |sed -e 's/\|//g') ; do aws $svc describe-account-attributes 2>/dev/null || echo "not supported for $svc" ; done
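The cryptic "3>&1 1>&2 2>&3 3>&-" prefix swaps stdout and stderr, because `aws list` prints its service list on stderr. A minimal sketch of just that swap (the "out"/"err" strings are placeholders, not aws output):

```shell
# the fd dance: 3 saves stdout, 1 is pointed at stderr, 2 at the saved stdout, 3 is closed.
# After the swap, whatever the command wrote to STDERR is what gets captured;
# the outer 2>/dev/null discards the relocated stdout for a clean demo.
swapped=$( { { echo out; echo err >&2; } 3>&1 1>&2 2>&3 3>&-; } 2>/dev/null )
echo "captured: $swapped"
# prints: captured: err
```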
charity / initialize.tf
Created May 19, 2016 18:31
top-level initialize.tf, symlinked into environments
## declare all the env-specific variables that are defined in *.tfvars
variable "env" { }
variable "name" { }
variable "size" { }
variable "cidr" { }
variable "instance_type" { }
variable "kafka_instance_type" { }
variable "retriever_instance_type" { }
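Each environment then supplies values in its own *.tfvars file. A hypothetical staging.tfvars matching the declarations above (every value invented for illustration):

```hcl
# hypothetical staging.tfvars; all values made up
env                     = "staging"
name                    = "hound"
size                    = "3"
cidr                    = "10.1.0.0/16"
instance_type           = "m3.medium"
kafka_instance_type     = "m3.large"
retriever_instance_type = "m3.large"
```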
charity / variables.tf
Created May 19, 2016 18:29
top level terraform variables.tf
### these variables should be the same across all environments
# your aws secret key and access key should be in your env variables
provider "aws" {
  region = "us-east-1"
}
variable "tf_s3_bucket" { default = "hound-terraform-state" }
variable "master_state_file" { default = "base.tfstate" }
variable "prod_state_file" { default = "production.tfstate" } # TODO: make init.sh use these variables
variable "staging_state_file" { default = "staging.tfstate" }
# file name terraform/modules/aws_vpc/vpc.tf
# first create the VPC.
# Prefix resources with var.name so we can have many environments trivially
resource "aws_vpc" "mod" {
  cidr_block           = "${var.cidr}"
  enable_dns_hostnames = "${var.enable_dns_hostnames}"
  enable_dns_support   = "${var.enable_dns_support}"
  tags {
    Name = "${var.env}_vpc"