View gist:97bb1ef255024aef79b62e89659198b7
cd /home/andrew/Desktop/projects/github/photobox/services-operations/aws/sparkleformation;
sfn create test --debug
[Sfn]: Please select an entry:
1. Default
2. Stack Dns
3. Service Boxtop
4. Service Hpp Listen
5. Service Hpp
6. Service Scriptsbox

Keybase proof

I hereby claim:

  • I am ajohnstone on github.
  • I am ajohnstone on keybase.
  • I have a public key whose fingerprint is 3657 5228 12EA F1FF 9A2E B738 1B09 88E9 DD22 D552

To claim this, I am signing this object:

START="$(date +'%Y-%m-%dT%H:%M:%S' --date '-5 minutes')";
END="$(date +'%Y-%m-%dT%H:%M:%S')";
aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[].LoadBalancerName' | while read -r LB; do
  DATA=$(aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name "RequestCount" \
    --dimensions '[{"Name":"LoadBalancerName","Value":"'"$LB"'"}]' \
    --start-time "$START" \
    --end-time "$END" \
    --period 60 \
    --statistics Sum); # Sum of requests per 60s period (the original gist truncated here)
  echo "${LB}: ${DATA}";
done
  1. What is the full DNS name for the ELB(s) that require manual scaling?
  2. Event start date/time (and timezone). If traffic has already started, is the lack of this prewarm causing impact to a live application?
  3. What is the end date/time/timezone of your event?
  4. The expected requests per second that you are anticipating your load balancer will receive during this event, as well as an indication over what period of time the traffic will ramp up to this level from current levels. (For example, we expect 15,000 rps, which will increase from the current 1,000 requests per second over a period of 30 minutes.)
  5. For this ELB, what is the average size of an HTTP request in bytes, and the average size of an HTTP response in bytes? If you enable your ELB access logs [2], you can calculate both by averaging the "received_bytes" and "sent_bytes" columns once you have logs covering a representative period.
  6. Number of Availability Zones enabled on your ELB. Before a pre-warm can be applied
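For question 5, the averages can be computed straight from the access logs. In the classic ELB access-log format, received_bytes and sent_bytes are the 10th and 11th space-separated fields; a minimal sketch over stand-in log lines (all values hypothetical, not from a real ELB):

```shell
# Two hypothetical log lines, reduced to the classic ELB field layout:
# received_bytes is $10 and sent_bytes is $11.
printf '%s\n' \
  't elb c:1 b:1 0 0 0 200 200 100 1000 "GET / HTTP/1.1" ua - -' \
  't elb c:1 b:1 0 0 0 200 200 300 3000 "GET / HTTP/1.1" ua - -' \
  | awk '{ req += $10; res += $11; n++ } END { print req/n, res/n }'
```

This prints the average request and response size in bytes for the sampled lines; point the same awk at the real downloaded log files to answer the question.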
View gist:ad267f92d93d8e5e0a8a9fbbe30681c0
aws route53 list-resource-record-sets --hosted-zone-id=/hostedzone/Z2KVJKNS01RHGO | jq -c '.ResourceRecordSets[] | select(.Name | contains("jenkins"))' | while read -r line; do
  read NAME DNS_NAME HOSTED_ZONE_ID HEALTH <<<"$(echo "$line" | jq -r '.Name + " " + .AliasTarget.DNSName + " " + .AliasTarget.HostedZoneId + " " + (.AliasTarget.EvaluateTargetHealth | tostring)')";
  # NOTE: the AliasTarget block below is reconstructed from the fields parsed above
  cat <<EOF
{ "Changes": [ { "Action": "DELETE",
    "ResourceRecordSet": { "Name": "${NAME}", "Type": "A",
      "AliasTarget": { "HostedZoneId": "${HOSTED_ZONE_ID}", "DNSName": "${DNS_NAME}", "EvaluateTargetHealth": ${HEALTH} } } } ] }
EOF
done
View gist:232ed3d51a69128e79c295e96ee71042

The Kubernetes cluster has m3.medium nodes, which only have an ephemeral storage capacity of 4 GB.

This is easily exhausted, so the total size needs to be increased.

I've allocated the following, so technically the LVM group could make use of it. However, it's unclear how to get it to expand the volume automatically.

export MASTER_DISK_TYPE='gp2';
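Done by hand, picking up the extra allocation is the usual LVM growth sequence. A sketch only: the device and volume names (/dev/xvdc, vg0, lv-data) are assumptions, not the cluster's actual names, and the filesystem is assumed to be ext4.

```shell
# Assumed names: new EBS device /dev/xvdc, volume group vg0, logical volume lv-data.
pvcreate /dev/xvdc                      # initialise the new disk as a physical volume
vgextend vg0 /dev/xvdc                  # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/lv-data  # grow the logical volume into the free space
resize2fs /dev/vg0/lv-data              # grow the ext4 filesystem to match
```

Automating this would mean running the same sequence from node userdata or a boot unit once the extra volume is attached.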
View gist:d10a17e51ca808ca82733f8f307de297
function kubernetes::deployment::wait {
  k_cmd="kubectl --namespace=$ns get deployments $deployment";
  while true; do
    observed=$($k_cmd -o 'jsonpath={.status.observedGeneration}');
    # jsonpath fields are case-sensitive: .metadata.generation, not .Generation
    generated=$($k_cmd -o 'jsonpath={.metadata.generation}');
    [ "$?" -ne 0 ] && break;
    [ "${observed}" -ge "${generated}" ] && {
      updated_replicas=$($k_cmd -o 'jsonpath={.status.updatedReplicas}');
      # assumed completion (the gist truncates here): stop once updatedReplicas is reported
      [ -n "${updated_replicas}" ] && break;
    };
    sleep 2;
  done
}
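The exit condition hinges on .status.observedGeneration catching up with .metadata.generation, i.e. the controller has seen the latest spec. The comparison can be exercised without a cluster (the numbers are hypothetical stand-ins for the two jsonpath reads):

```shell
# Stand-in values for .status.observedGeneration and .metadata.generation
observed=3; generated=3;
if [ "${observed}" -ge "${generated}" ]; then
  echo "controller has observed the latest spec";
fi
```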
View ingress-pod
$ kubectl exec --tty -i nginx-ingress-controller-9xccu -- ls -alh --color
total 6.2M
drwxr-xr-x 46 root root 4.0K May 1 12:47 .
drwxr-xr-x 46 root root 4.0K May 1 12:47 ..
-rwxr-xr-x 1 root root 0 May 1 12:46 .dockerenv
-rwxr-xr-x 1 root root 0 May 1 12:46 .dockerinit
drwxr-xr-x 2 root root 4.0K Apr 28 00:50 bin
drwxr-xr-x 2 root root 4.0K Nov 27 13:59 boot
drwxr-xr-x 5 root root 380 May 1 12:46 dev
drwxr-xr-x 45 root root 4.0K May 1 12:46 etc
import boto3

r53_client = boto3.client('route53')
hosted_zone = ''

def lambda_handler(event = {}, context = {}):
    aws_region = event['detail']['awsRegion']
    elb_client = boto3.client('elb', region_name=aws_region)
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=4
export MASTER_SIZE=m3.xlarge
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=photobox-kubernetes-artifacts
export MULTIZONE=1;
export AWS_SSH_KEY=~/.ssh/photobox-eu
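These variables are consumed by the cluster bring-up scripts (for clusters of this vintage, cluster/kube-up.sh with KUBERNETES_PROVIDER=aws). A quick sanity check before kicking off a bring-up; a sketch, with the validation logic my own rather than anything kube-up performs:

```shell
export NUM_NODES=4
export NODE_SIZE=m3.medium
# Fail fast if NUM_NODES is not numeric before invoking kube-up.sh
case "$NUM_NODES" in
  ''|*[!0-9]*) echo "NUM_NODES must be an integer" >&2; exit 1;;
  *) echo "provisioning ${NUM_NODES} x ${NODE_SIZE} nodes";;
esac
```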