NATS Streaming Kubernetes StatefulSet
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config
  namespace: default
data:
  # Used to set the cluster id in the deployment
  # Look for $(NATS_CLUSTER) usage below
  NATS_CLUSTER: nats-streaming
  # This is the NATS hostname to connect to
  NATS_HOST: nats://nats-cluster:4222
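  # Note: this assumes a NATS service named "nats-cluster" already exists
  # (per the comments below, a nats-operator based deployment provides it)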
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming-svc
  labels:
    app: nats-streaming-svc
spec:
  ports:
    # Port 4222 is exposed from the NATS cluster and its service
    # - port: 4222
    #   name: client
    #   protocol: TCP
    #
    # You can access the management URLs:
    #   curl http://nats-streaming-svc:8222/streaming/clientsz
    #   curl http://nats-streaming-svc:8222/streaming/channelsz
    #
    - port: 8222
      name: management
      protocol: TCP
  selector:
    app: nats-streaming-stateful
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats-streaming-stateful
spec:
  serviceName: nats-streaming-svc
  # Adjust as needed, but use no fewer than 3 for cluster quorum
  replicas: 3
  selector:
    matchLabels:
      app: nats-streaming-stateful
  template:
    metadata:
      labels:
        app: nats-streaming-stateful
    spec:
      containers:
        # Management port is enabled at the default 8222
        - command:
            - /nats-streaming-server
            - -ns
            - "$(NATS_HOST)"
            - -cluster_id
            - "$(NATS_CLUSTER)"
            - -m
            - "8222"
            - -store
            - file
            - -clustered
            - --cluster_node_id
            - "$(POD_NAME)"
            - -cluster_log_path
            - /opt/stan-logs
            - -dir
            - /opt/stan-data
            # This is the ugly part. Update this to match your replica count
            - --cluster_peers
            - "nats-streaming-stateful-0,nats-streaming-stateful-1,nats-streaming-stateful-2"
            # Do not bootstrap the cluster in a stateful set or you will end up with 3 masters
            # Let the nodes elect a leader instead
            # - -cluster_bootstrap
          # See: https://hub.docker.com/_/nats-streaming
          # This docker image contains both nats and nats-streaming
          # The version used here contains NATS 2.0 internally
          name: nats-streaming-stateful
          image: nats-streaming:0.15.1
          env:
            # The pod name is used as the cluster node id
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_CLUSTER
              valueFrom:
                configMapKeyRef:
                  name: nats-config
                  key: NATS_CLUSTER
            - name: NATS_HOST
              valueFrom:
                configMapKeyRef:
                  name: nats-config
                  key: NATS_HOST
          ports:
            - containerPort: 4222
              name: client
            - containerPort: 8222
              name: management
          # We need 2 volume mounts
          volumeMounts:
            # Contains message payload data, channels and subscriptions
            - name: stan-pvc-data
              mountPath: /opt/stan-data
            # RAFT logs for cluster support
            - name: stan-pvc-logs
              mountPath: /opt/stan-logs
          # This check is used by kubernetes to verify the pod is still alive
          livenessProbe:
            httpGet:
              path: /
              port: management
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          # This check is used by kubernetes to verify the pod is ready to receive traffic
          readinessProbe:
            httpGet:
              path: /
              port: management
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
      # This is important to spread pods across nodes.
      # If we don't do this a node upgrade might take down more than 1 pod,
      # which would compromise quorum
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - nats-streaming-stateful
                topologyKey: kubernetes.io/hostname
  # These were auto-created on my kubernetes cluster (GKE)
  # Claims should persist across re-deploys and upgrades
  volumeClaimTemplates:
    - metadata:
        name: stan-pvc-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            # Adjust as needed; the value chosen here is arbitrary
            storage: 5Gi
    - metadata:
        name: stan-pvc-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            # Adjust as needed; the value chosen here is arbitrary
            storage: 5Gi
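The pod anti-affinity above spreads replicas across nodes, but a voluntary drain during a node upgrade can still evict one pod while another is restarting. A PodDisruptionBudget is a natural companion here; a minimal sketch, not part of the original gist (the resource name is my choice; policy/v1beta1 was the current API group at the time):

---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nats-streaming-pdb
spec:
  # Never let voluntary disruptions (drains, upgrades) take the
  # cluster below 2 of 3 replicas, which would break RAFT quorum
  minAvailable: 2
  selector:
    matchLabels:
      app: nats-streaming-stateful

With minAvailable: 2 the eviction API drains one node at a time and waits for the evicted replica to become ready again before touching the next, which is exactly the behaviour needed to survive a rolling node upgrade.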
Qwerios commented Jul 2, 2019

This is a statefulset for a NATS streaming cluster. It requires 2 volume claims per replica because both the message data and the RAFT cluster logs need to be persisted. I ended up making this setup because the NATS streaming operator does not support persistent volumes in cluster mode yet. My goal is to have this be stable enough to keep the cluster available during kubernetes node upgrades.
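For reference, the volumeClaimTemplates produce one pair of PVCs per replica, named <template-name>-<pod-name> by Kubernetes, so with 3 replicas you should end up with:

stan-pvc-data-nats-streaming-stateful-0
stan-pvc-logs-nats-streaming-stateful-0
stan-pvc-data-nats-streaming-stateful-1
stan-pvc-logs-nats-streaming-stateful-1
stan-pvc-data-nats-streaming-stateful-2
stan-pvc-logs-nats-streaming-stateful-2

Because these claims are not deleted when the StatefulSet is removed, a redeploy under the same names picks the existing data back up.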

Qwerios commented Jul 8, 2019

I've had to change the setup as clustering did not work. I swapped out the embedded NATS for a nats-operator based deployment. I've also removed cluster_bootstrap and added explicit peers to prevent split-brain.
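For context, the nats-operator side of this setup boils down to a NatsCluster custom resource. A minimal sketch, assuming the operator and its CRDs are already installed (the version pin is illustrative):

# Sketch of the nats-operator deployment mentioned above.
# The cluster name "nats-cluster" is what the ConfigMap's
# NATS_HOST (nats://nats-cluster:4222) resolves to.
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: nats-cluster
spec:
  size: 3
  version: "2.0.0"

The operator should create a client service named after the cluster, which is why the ConfigMap in the manifest above can point NATS_HOST at nats://nats-cluster:4222.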
