
K3D Workshop

App Case-Study

A "simple" distributed app today.

  • HTTP REST API
  • Task Workers (Queue consumers) and Crons (Periodic jobs)
  • Web client
  • Android / iOS client
  • Persistent Storage: PostgreSQL / SQLite3 / MySQL / Schema-less
  • Caching: Redis / memcache / in-memory
  • Queue: Kafka / RabbitMQ / Redis
  • File Storage: S3 / MinIO
  • Log Index: Elasticsearch
  • Metrics: Prometheus

Development

# docker-compose.yml
services:
  api:
    build:
      context: .
    environment:
      APP_DEBUG: 'true'
      APP_ENV: 'development'
      DATABASE_URL: 'postgresql://bob:pass@db/acme'
      REDIS_HOST: cache
      RABBITMQ_HOST: queue
    image: acme/my-awesome-api
    container_name: acme-api
    volumes:
      - .:/app
    depends_on:
      - db
      - cache
      - queue

  worker:
    image: acme/my-worker
    container_name: acme-worker
    volumes:
      - .:/app
    depends_on:
      - db
      - queue

  ui:
    image: acme/my-ui
    container_name: acme-ui
    depends_on:
      - api

  db:
    image: postgres:12-alpine
    container_name: acme-db
    environment:
      POSTGRES_DB: acme
      POSTGRES_USER: bob
      POSTGRES_PASSWORD: pass

  cache:
    image: redis:5-alpine
    container_name: acme-redis

  queue:
    image: rabbitmq:3.8-alpine
    container_name: acme-rabbitmq

Development Workflow

Run the 3rd-party services in the background.

docker-compose pull
docker-compose up -d db cache queue

Run the API/Worker/UI servers either inside or outside of Docker.

Outside:

$ # db/cache/queue must publish ports; the service hostnames only resolve inside the Compose network
$ export DATABASE_URL="postgresql://bob:pass@localhost/acme"
$ export REDIS_HOST="localhost" RABBITMQ_HOST="localhost"

$ npm start
$ # or
$ go run .
$ # or
$ python main.py

Inside:

$ find . -name '*.py' | entr -r docker-compose up api
$ # or
$ docker-compose run --rm api python src/app.py
$ # or
$ docker-compose exec api py.test -v -s

Config

Developers should not attempt to pre-configure services for all possible environments. It's impossible. Config varies substantially across deploys, code does not.

Decouple config from the codebase. Apply the “Dependency Injection” principle.

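In practice, the service reads everything it needs from the environment at start-up. A minimal Python sketch (the variable names mirror the compose file above; the file name and defaults are illustrative, not part of this gist):

# config.py -- read all runtime config from the environment
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://bob:pass@localhost/acme")
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
RABBITMQ_HOST = os.environ.get("RABBITMQ_HOST", "localhost")
APP_DEBUG = os.environ.get("APP_DEBUG", "false").lower() == "true"

The same image then runs unchanged in every environment; only the injected values differ.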

Deployment Variations

  • Integration: Latest & greatest, for end-to-end testing
  • Staging: Production-like environment for stress-testing and benchmarks
  • Production: Fully-distributed public system

Why stop here?

With proper config injection, you can spin up a full cluster for any environment in minutes.
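One possible shape for this (the file names and chart path are illustrative, not from this gist): keep one env/values file per deployment variation and feed it to the same compose file or chart.

$ # illustrative: per-environment env file, same compose file
$ docker-compose --env-file .env.staging up -d

$ # illustrative: per-environment Helm values, same chart
$ helm2 install --name acme ./chart -f values.staging.yaml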


Ease the Pain

Aliases

$ alias gf='git fetch --all'
$ alias gs='git status -sb'
$ alias gl="git log --graph --all --pretty='%C(240)%h%C(reset) -%C(auto)%d%Creset %s %C(242)(%an %ar)'"

Watch for changes

$ find . -name '*.py' | entr -r docker-compose up api

entr is a utility for running arbitrary commands when files change.


Makefiles

Makefiles are awesome.

DATABASE_NAME ?= acme

bash:
	docker-compose exec api bash

debug:
	docker-compose run --rm api python src/app.py

logs:
	docker-compose logs --tail 15 -f

upgrade:
	docker-compose exec api cli-utility database up

psql:
	docker-compose exec db psql -h db -U bob "$(DATABASE_NAME)"

up:
	docker-compose up -d --remove-orphans

stop:
	docker-compose stop -t 5

test:
	docker-compose exec api py.test -v -s

test-run:
	docker-compose run --rm --no-deps api py.test -v -s

watch:
	find src/ -name '*.py' | entr -r docker-compose up --no-deps api

.PHONY: bash debug logs upgrade psql up stop test test-run watch

Makefile Usage

$ make up

> docker-compose up -d --remove-orphans

$ make test

> docker-compose exec api py.test -v -s

Don't bury build & shell scripts inside your application manifests (e.g. package.json); keep them in the Makefile.
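If you do need package.json scripts, keep them as thin wrappers that delegate to make (illustrative fragment, not from this gist):

{
  "scripts": {
    "start": "make up",
    "test": "make test"
  }
}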


K3D Setup

  • Install Docker
  • Install k3d (With macOS: brew install k3d)
  • Install kubectl (With macOS: brew install kubernetes-cli)
  • Install Helm 2 (With macOS: brew install helm@2)

Create a cluster and publish port 8084 (k3d v1.x CLI syntax):

k3d create --publish 8084:80 --server-arg "--no-deploy=traefik"

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

kubectl get node  # or kubectl get no
kubectl get storageclass  # or kubectl get sc
kubectl get namespace  # or kubectl get ns
kubectl get pod -A
kubectl get svc -A

# Let's install Helm 2 tiller
kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

helm2 init --service-account=tiller

helm2 install --name nginx-ingress stable/nginx-ingress

curl http://localhost:8084
# Should respond with 404

helm2 install --name blog stable/ghost \
    --set service.type=ClusterIP \
    --set ingress.enabled=true \
    --set persistence.enabled=false

curl http://localhost:8084 -H 'Host: ghost.local'

echo "127.0.0.1 ghost.local" >> /etc/hosts

open http://ghost.local:8084

Questions?

rafi.io

github.com/rafi

Rafael Bodill

