
I've been an aggressive Kubernetes evangelist over the last few years. It has been the hammer with which I have approached almost all my deployments, the one tool I have mentioned (shoved down clients' throats) in almost all my communications with clients, and my go-to choice when I was mocking up my first startup (saharacluster.com).

A few weeks ago Docker 1.13 was released, and I was tasked with replicating a client's Kubernetes deployment on Swarm; more specifically, testing running Compose on Swarm.

And it was a dream!

All our apps were already dockerised, and all I had to do was make a few modifications to an existing Compose file that I had used for testing prior to said deployment on Kubernetes.

And with the ease with which I was able to expose our endpoints, manage volumes, handle networking, and deploy and tear down the setup, I honestly see no reason not to use Swarm. There is no mission-critical feature, nor any incredibly convenient nice-to-have feature, in Kubernetes that I would be giving up.
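For context, the Compose-on-Swarm workflow boils down to a single `docker stack deploy` against a Compose v3 file. A minimal sketch, where the `web` service and `nginx:alpine` image are illustrative stand-ins rather than the client's actual stack:

```shell
# Illustrative Compose v3 file; service name and image are stand-ins
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 2
EOF

# On a Swarm manager (Docker 1.13+) you would then run:
#   docker swarm init
#   docker stack deploy -c docker-compose.yml mystack
# ...and tear everything down again with:
#   docker stack rm mystack
```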


@jonathan-kosgei
jonathan-kosgei / installing_calico_for_docker_networking.sh
Last active March 3, 2017 11:57
Installing Calico for Docker Networking
#!/bin/sh
# Setup Calico for Docker on Ubuntu 16.04
# Change to the internal ip of your node
# Note: command substitution was missing in the original
NODE_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
# Install docker
sudo apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
@jonathan-kosgei
jonathan-kosgei / internal_ip.sh
Created February 25, 2017 10:50
Command to get internal ip programmatically
# From this askubuntu question http://askubuntu.com/a/604691/280044
ip route get 8.8.8.8 | awk '{print $NF; exit}'
@jonathan-kosgei
jonathan-kosgei / largest_number_in_nested_number_list.py
Last active March 5, 2017 19:02
Find largest number in nested number list Python
def max_in_nested_number_list(numbers):
    """Return the largest number in a (possibly nested) list of numbers."""
    largest = 0  # note: assumes non-negative numbers, as in the original
    for item in numbers:
        if isinstance(item, list):
            # Recurse instead of appending to the list being iterated,
            # which would mutate the caller's data.
            candidate = max_in_nested_number_list(item)
        else:
            candidate = item
        if candidate > largest:
            largest = candidate
    return largest
""" Make sure to create the "backup" tag on the volumes you want to backup.
For authentication, setup the aws policy and user as specified in the snapshot-trust.json and snapshot-policy.json
Inspired by: https://serverlesscode.com/post/lambda-schedule-ebs-snapshot-backups/
"""
from time import gmtime, strftime
import boto3
region = "us-west-2"
backup_tag = "backup"
@jonathan-kosgei
jonathan-kosgei / schedule_function.sh
Created April 19, 2017 03:52
AWS Lambda backup EBS volumes with BOTO3
#!/bin/bash
# Scheduling the above script hourly. Ensure you're properly authenticated via aws configure.
zip ebs-backup-worker.zip schedule-ebs-snapshot-backups.py
aws lambda create-function --function-name ebs-backup-worker \
--runtime python2.7 \
--role "arn for your lambda user's role" \
--handler lambda_handler \
--zip-file fileb://ebs-backup-worker.zip
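`create-function` only uploads the code; the hourly trigger itself comes from a CloudWatch Events rule. A hedged sketch of that wiring — the rule name, account id, and ARN below are placeholders, and the `aws` calls are commented out because they need valid credentials:

```shell
# Target list for the rule; the ARN is a placeholder, not a real function
cat > targets.json <<'EOF'
[{"Id": "1", "Arn": "arn:aws:lambda:us-west-2:123456789012:function:ebs-backup-worker"}]
EOF

# aws events put-rule --name ebs-backup-hourly --schedule-expression "rate(1 hour)"
# aws events put-targets --rule ebs-backup-hourly --targets file://targets.json
# aws lambda add-permission --function-name ebs-backup-worker \
#   --statement-id ebs-backup-hourly --action lambda:InvokeFunction \
#   --principal events.amazonaws.com
echo "targets.json written"
```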
FROM debian:jessie
MAINTAINER a.mulholland
RUN apt-get update && apt-get upgrade -y &&\
    apt-get install -y curl apt-transport-https ca-certificates &&\
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - &&\
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list &&\
    curl -sL https://deb.nodesource.com/setup_6.x | bash - &&\
    apt-get update &&\
    apt-get install -y nodejs yarn nginx php5-fpm php5-mysqlnd php5-curl php5-mcrypt php5-gd git curl mysql-client openssh-client
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
@jonathan-kosgei
jonathan-kosgei / thirdpartypaths.json
Created May 3, 2017 06:47
Kubernetes Third Party Path json
{
"/apis/{fqdn}/v1/{resource}": {
"get": {
"security": [
{
"Bearer": [
]
}
],
@jonathan-kosgei
jonathan-kosgei / docker_image_clean.sh
Last active July 12, 2017 13:08
Clear all images on a Docker host except base images
#!/bin/bash
# set variable with base image names
# get base image ids and set in other list
all_images=`mktemp`
base_images=`mktemp`
#base_image_names="alpine linux"
#ids=`docker images --no-trunc -q $base_image_names`
bases="sha256:7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560 sha256:33aa78cbda15ae84375c46dfc3fc07560c9af8e7b8f37745d2c6542e2affec9f"
docker images -q --no-trunc > $all_images
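The gist cuts off here. A sketch of how the remaining filter step could work, assuming `$all_images` and `$base_images` each hold one image digest per line (the digests below are stand-ins for illustration, so this runs without a Docker daemon):

```shell
all_images=$(mktemp)
base_images=$(mktemp)
# Stand-in digests; on a real host these come from `docker images -q --no-trunc`
printf 'sha256:aaa\nsha256:ccc\n' | sort > "$all_images"
printf 'sha256:aaa\nsha256:bbb\n' | sort > "$base_images"

# comm -23 keeps lines only in the first (sorted) file:
# images present on the host but not in the base list
comm -23 "$all_images" "$base_images" > to_delete.txt
cat to_delete.txt
# xargs -r docker rmi < to_delete.txt   # uncomment on a real Docker host
```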