PRODUCTION root@kafka-6c3f65f0:~# kafkacat -L -b localhost:9092
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
 4 brokers:
  broker 1003 at ip-10-0-246-113.ec2.internal:9092
  broker 1004 at ip-10-0-190-45.ec2.internal:9092
  broker 1009 at ip-10-0-213-10.ec2.internal:9092
  broker 1001 at ip-10-0-148-52.ec2.internal:9092
 4 topics:
  topic "hound-dogfood.retriever-mutation" with 1 partitions:
    partition 0, leader 1001, replicas: 1001, isrs: 1001
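Not part of the gist, but a common follow-up once the metadata looks sane: spot-check the topic's contents with kafkacat's consumer mode. The topic name below is taken from the listing above; everything else is just a sketch.
# read the last five messages from the topic and exit (sketch, not from the gist)
kafkacat -C -b localhost:9092 -t hound-dogfood.retriever-mutation -o -5 -e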
@charity
charity / initialize.tf
Created May 19, 2016 18:31
top-level initialize.tf, symlinked into environments
## declare all the env-specific variables that are defined in *.tfvars
variable "env" { }
variable "name" { }
variable "size" { }
variable "cidr" { }
variable "instance_type" { }
variable "kafka_instance_type" { }
variable "retriever_instance_type" { }
@charity
charity / variables.tf
Created May 19, 2016 18:29
top level terraform variables.tf
### these variables should be the same across all environments
# your aws secret key and access key should be in your env variables
provider "aws" {
region = "us-east-1"
}
variable "tf_s3_bucket" { default = "hound-terraform-state" }
variable "master_state_file" { default = "base.tfstate" }
variable "prod_state_file" { default = "production.tfstate" } # TODO: make init.sh use these variables
variable "staging_state_file" { default = "staging.tfstate" }
@charity
charity / init.sh
Created May 18, 2016 20:19
terraform environment init.sh
#!/bin/bash
# Usage: ./init.sh once to initialize remote storage for this environment.
# Subsequent tf actions in this environment don't require re-initialization,
# unless you have completely cleared your .terraform cache.
#
# terraform plan -var-file=./production.tfvars
# terraform apply -var-file=./production.tfvars
tf_env="production"
@charity
charity / kafka-snippets
Created May 17, 2016 23:32
kafka snippets updated to work w ubuntu 12.04 and kafka 0.9
#!/bin/bash -xe
# requires jq 1.5 (or at least > 1.3) and kafkacat
PATH=$PATH:/usr/lib/kafka/bin
topic="hound-staging.retriever-mutation"
which kafkacat || { echo 'no kafkacat found, bye!'; exit 1; }
which jq || { echo 'no jq found, bye!'; exit 1; }
# make sure jq is v 1.5
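The version check itself isn't shown in the preview; one way it might look (an assumption, not from the gist):
# sketch: bail out unless jq reports 1.5 or newer
jq_version=$(jq --version 2>&1)   # jq 1.5 prints "jq-1.5"
case "$jq_version" in
  jq-1.[5-9]*|jq-[2-9]*) : ;;
  *) echo "need jq >= 1.5, found: $jq_version"; exit 1 ;;
esac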
# snippet from terraform/env-dev/peering.tf
# import staging state, add routes from dev to staging
resource "terraform_remote_state" "staging_state" {
  backend = "s3"
  config {
    bucket = "${var.tf_s3_bucket}"
    region = "${var.region}"
    key    = "${var.staging_state_file}"
  }
}
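What "add routes from dev to staging" might look like once that state is imported. The peering connection, the module outputs, and the vpc_id/cidr outputs exported by the staging environment are all assumptions, not shown in the gists; in the 0.6-era resource form, remote outputs are read via `.output.<name>`.
# sketch only -- assumes staging exports vpc_id and cidr as outputs
resource "aws_vpc_peering_connection" "dev_to_staging" {
  vpc_id      = "${module.vpc.vpc_id}"
  peer_vpc_id = "${terraform_remote_state.staging_state.output.vpc_id}"
  auto_accept = true
}

resource "aws_route" "dev_to_staging" {
  route_table_id            = "${module.vpc.route_table_id}"
  destination_cidr_block    = "${terraform_remote_state.staging_state.output.cidr}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.dev_to_staging.id}"
}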
# file name: terraform/env-staging/peering.tf
# No peering / direct connectivity between staging and prod, for safety.
resource "terraform_remote_state" "dev_state" {
  backend = "s3"
  config {
    bucket = "${var.tf_s3_bucket}"
    region = "${var.region}"
    key    = "${var.dev_state_file}"
  }
}
# file name: terraform/modules/aws_vpc/vpc.tf
# first create the VPC.
# Prefix resources with var.name so we can have many environments trivially
resource "aws_vpc" "mod" {
  cidr_block           = "${var.cidr}"
  enable_dns_hostnames = "${var.enable_dns_hostnames}"
  enable_dns_support   = "${var.enable_dns_support}"
  tags {
    Name = "${var.env}_vpc"
  }
}
# file name: infra/terraform/modules/aws_vpc/bastion_sg.tf
resource "aws_security_group" "bastion_ssh_sg" {
  name        = "bastion_ssh"
  description = "Allow ssh to bastion hosts for each vpc from anywhere"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Introducing a new series: Post-Mortem Book Reports
Dear fellow systems engineers,
Take a moment and think about the past few years in systems outages and
public post-mortems.
What were your favorite outages? Which post-mortems have stuck with you,
months or even years later? What did you learn from them?