Example deployment descriptor for running Cog on Kubernetes. Uses the secret store as well as persistent disks for Postgres, Cog, and Relay data.

Cog on Kubernetes

This experiment was originally built on Google Cloud's Kubernetes and makes use of gcePersistentDisk. If you are not using Google Cloud, you'll need to adjust the gcePersistentDisk volumes to use the persistent disk technology available in your cluster.
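
For example, on AWS the cog-demo-data volume might instead point at an EBS volume, something like this sketch (the volume ID is a made-up placeholder):

- name: cog-demo-data
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0  # hypothetical; replace with your EBS volume ID
    fsType: ext4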

At a high level the steps look something like this, though there may be subtle errors: this experiment happened a few weeks ago, and I'm reconstructing the steps from memory along with the test descriptors I used.

  1. Set up Kubernetes and create the persistent disks referenced in the deployment (see the sketch after this list).
  2. Create the necessary secrets. [01-secrets.yml]
  3. Create the Cog pod. [02-deployment.yml]
  4. Add a load balancer service so that Cog is reachable from outside the cluster. [03-loadbalancer.yml]
  5. Bootstrap Cog, set up an admin user, and configure the relay group and relay.
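
Assuming GCE and the disk names used in the deployment descriptor below, creating the disks and applying the descriptors (steps 1-4) might look something like this (disk size and zone are placeholders; adjust for your project):

# Create the persistent disks referenced by pdName in the deployment
gcloud compute disks create cog-demo-data cog-demo-pgdata --size 10GB --zone us-central1-a

# Create the secrets, the Cog pod, and the load balancer service
kubectl create -f 01-secrets.yml
kubectl create -f 02-deployment.yml
kubectl create -f 03-loadbalancer.yml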

Notes

Secrets

  • slack-token - Slack API token for your Cog instance to use.
  • postgres-password - Password for the cog user that is created in your Postgres container.
  • database-url - The database URL that Cog uses to talk to the Postgres container. You should only have to replace pgpassword with the value of the postgres-password key that you defined above.
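
Note that values under a Secret's data field must be base64-encoded before they go into 01-secrets.yml. Encoding them might look like this (the values are made-up placeholders):

# -n keeps a trailing newline out of the encoded value
echo -n 'xoxb-0000-slack-token-placeholder' | base64
echo -n 'mypgpassword' | base64
echo -n 'ecto://cog:mypgpassword@localhost:5432/cog' | base64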

Deployment

Most of the variables are self-explanatory, but a few deserve special attention.

  • COG_*_URL_HOST - The COG_API_URL_HOST, COG_SERVICE_URL_HOST, and COG_TRIGGER_URL_HOST should be configured to point to a hostname or IP address where the ports exposed by the Cog container in the pod are reachable.
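
Since the load balancer's external address usually isn't known until the service exists, one approach is to create the service first, look up its address, and then fill in these variables (a sketch, assuming the cog-demo service defined below):

# The EXTERNAL-IP column shows the address to use for the COG_*_URL_HOST variables
kubectl get service cog-demo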

Relay

  • RELAY_ID - Use uuidgen or another method to create a UUID for the Relay. Currently, these should all be lowercase (see the one-liner after this list).
  • RELAY_COG_TOKEN - Choose a password for the Relay to use when connecting to Cog.
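
Since uuidgen emits uppercase on some platforms (e.g. macOS), you can generate a lowercase UUID like this:

# Generate a lowercase UUID for RELAY_ID
uuidgen | tr '[:upper:]' '[:lower:]'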

Bootstrap & Configuration

Once the pod is running, you need to perform the normal Cog bootstrap. In testing, I used kubectl exec to run a shell in the cog container, where cogctl is available. At a minimum, you'll need to do something like the following to bootstrap Cog, create a new admin user, and create a new relay group and relay for the Relay running in the pod.

Note: these commands are untested and off the top of my head, but they should be pretty close.
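
Getting a shell in the cog container might look like this (a sketch, using the app=cog-demo label from the deployment below):

# Look up the pod name, then open a shell in its cog container
POD=$(kubectl get pods -l app=cog-demo -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it ${POD} -c cog -- /bin/sh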

# Export variables to be used in steps below
export RELAY_ID=<< RELAY_ID from deployment descriptor >>
export RELAY_COG_TOKEN=<< RELAY_COG_TOKEN from deployment descriptor >>
export ADMIN_FIRST_NAME=Eliza
export ADMIN_LAST_NAME=Example
export ADMIN_EMAIL=eliza@example.com
export ADMIN_PASSWORD=secretshhh
export ADMIN_SLACK_HANDLE=elizaslack

# Bootstrap Cog
cogctl bootstrap
cat ~/.cogctl # and record these credentials

# Create a new admin user
cogctl users create --first-name ${ADMIN_FIRST_NAME} --last-name ${ADMIN_LAST_NAME} --email ${ADMIN_EMAIL} --username ${ADMIN_EMAIL} --password ${ADMIN_PASSWORD}
cogctl groups add cog-admin --email ${ADMIN_EMAIL}

# Create a relay and relay group named "local"
cogctl relay-groups create local
cogctl relays create local --id ${RELAY_ID} --token ${RELAY_COG_TOKEN} --groups local --enable
01-secrets.yml

apiVersion: v1
kind: Secret
metadata:
  name: cog-demo
type: Opaque
data:
  # Values under data must be base64-encoded (see the note above); these are readable placeholders
  slack-token: slacktoken
  postgres-password: pgpassword
  database-url: ecto://cog:pgpassword@localhost:5432/cog
02-deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cog-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cog-demo
    spec:
      containers:
      # Postgres, with its data directory on a persistent disk
      - name: postgres
        image: postgres:9.5
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: cog
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cog-demo
              key: postgres-password
        - name: PGDATA
          value: "/data/postgres"
        volumeMounts:
        - name: cog-demo-pgdata
          mountPath: "/data"
      # Cog itself: API (4000), trigger (4001), service (4002), and MQTT (1883)
      - name: cog
        image: operable/cog:0.7.5
        imagePullPolicy: Always
        ports:
        - containerPort: 4000
        - containerPort: 4001
        - containerPort: 4002
        - containerPort: 1883
        env:
        - name: SLACK_API_TOKEN
          valueFrom:
            secretKeyRef:
              name: cog-demo
              key: slack-token
        - name: MIX_ENV
          value: prod
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: cog-demo
              key: database-url
        - name: COG_ADAPTER
          value: slack
        - name: COG_MQTT_HOST
          value: 0.0.0.0
        - name: COG_MQTT_PORT
          value: '1883'
        - name: COG_API_URL_HOST
          value: cog-demo.example.com
        - name: COG_SERVICE_URL_HOST
          value: cog-demo.example.com
        - name: COG_TRIGGER_URL_HOST
          value: cog-demo.example.com
        volumeMounts:
        - name: cog-demo-data
          mountPath: "/data"
        command:
        - scripts/docker-start
      # Relay; privileged so it can use the host's Docker socket
      - name: relay
        image: operable/relay:0.7.5
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: RELAY_ID
          value: 9451b7eb-2642-477d-b837-55da340ca272
        - name: RELAY_COG_TOKEN
          value: mytoken
        - name: RELAY_DYNAMIC_CONFIG_ROOT
          value: /data/relay/configs
        - name: RELAY_COG_REFRESH_INTERVAL
          value: 30s
        - name: RELAY_LOG_LEVEL
          value: debug
        volumeMounts:
        - name: cog-demo-data
          mountPath: "/data"
        - name: docker-socket
          mountPath: "/var/run/docker.sock"
        command:
        - /usr/local/bin/relay
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: cog-demo-data
        gcePersistentDisk:
          pdName: cog-demo-data
          fsType: ext4
      - name: cog-demo-pgdata
        gcePersistentDisk:
          pdName: cog-demo-pgdata
          fsType: ext4
03-loadbalancer.yml

apiVersion: v1
kind: Service
metadata:
  name: cog-demo
spec:
  ports:
  - name: cog-api
    port: 80
    targetPort: 4000
  - name: cog-trigger-api
    port: 4001
    targetPort: 4001
  - name: cog-service-api
    port: 4002
    targetPort: 4002
  selector:
    app: cog-demo
  type: LoadBalancer

so0k commented Nov 8, 2016

Here's an alternative that separates the cog pod from the relay pod (so they can run on different nodes across a cluster):

https://gist.github.com/so0k/f4308160a9a2e749aa0b90715288e08b

It also runs Postgres outside the cluster (with persistence and backups managed separately).
