ZooKeeper:

# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: zk
  labels:
    app: zookeeper
spec:
  # The ports are not used for proxying; this headless service exists only to create DNS records.
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
  # *.zk.default.svc.cluster.local
  clusterIP: None
  selector:
    app: zookeeper
---
# First quorum member
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-server1
  labels:
    app: zookeeper
spec:
  replicas: 1
  selector:
    id: zk-server1
  template:
    metadata:
      annotations:
        # s1.zk.default.svc.cluster.local
        pod.beta.kubernetes.io/hostname: s1
        pod.beta.kubernetes.io/subdomain: zk
      labels:
        app: zookeeper
        id: zk-server1
    spec:
      # Node label only needed because we're using local storage.
      # Run `kubectl label node foo role=server1` to set.
      nodeSelector:
        role: server1
      containers:
      - image: jplock/zookeeper
        imagePullPolicy: Always
        name: zookeeper
        # Replace with more robust code to grow/shrink members.
        env:
        - name: ZK_SERVER_ID
          value: "1"
        command:
          - sh
          - -c
          - "echo -e \"server.1=s1.zk.default.svc.cluster.local:2888:3888\nserver.2=s2.zk.default.svc.cluster.local:2888:3888\n\" >> /opt/zookeeper/conf/zoo.cfg; \
             mkdir -p /tmp/zookeeper; echo ${ZK_SERVER_ID} > /tmp/zookeeper/myid; \
             /opt/zookeeper/bin/zkServer.sh start-foreground"
        # For this to be useful the pod needs to land on the same node every time;
        # the node label above guarantees that.
        volumeMounts:
        - name: datadir
          mountPath: /tmp/zookeeper
        ports:
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
      volumes:
      - name: datadir
        hostPath:
          path: /var/lib/zookeeper
---
# Second quorum member
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-server2
  labels:
    app: zookeeper
spec:
  replicas: 1
  selector:
    id: zk-server2
  template:
    metadata:
      annotations:
        # s2.zk.default.svc.cluster.local
        pod.beta.kubernetes.io/hostname: s2
        pod.beta.kubernetes.io/subdomain: zk
      labels:
        app: zookeeper
        id: zk-server2
    spec:
      # Node label only needed because we're using local storage.
      # Run `kubectl label node foo role=server2` to set.
      nodeSelector:
        role: server2
      containers:
      - image: jplock/zookeeper
        imagePullPolicy: Always
        name: zookeeper
        # Replace with more robust code to grow/shrink members.
        env:
        - name: ZK_SERVER_ID
          value: "2"
        command:
          - sh
          - -c
          - "echo -e \"server.1=s1.zk.default.svc.cluster.local:2888:3888\nserver.2=s2.zk.default.svc.cluster.local:2888:3888\n\" >> /opt/zookeeper/conf/zoo.cfg; \
             mkdir -p /tmp/zookeeper; echo ${ZK_SERVER_ID} > /tmp/zookeeper/myid; \
             /opt/zookeeper/bin/zkServer.sh start-foreground"
        # For this to be useful the pod needs to land on the same node every time;
        # the node label above guarantees that.
        volumeMounts:
        - name: datadir
          mountPath: /tmp/zookeeper
        ports:
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
      volumes:
      - name: datadir
        hostPath:
          path: /var/lib/zookeeper
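
For reference, after the startup command runs, each container's config should end up looking roughly like this (values taken from the manifest above; the stock contents of the image's zoo.cfg are not shown):

# Appended to /opt/zookeeper/conf/zoo.cfg by the startup command
server.1=s1.zk.default.svc.cluster.local:2888:3888
server.2=s2.zk.default.svc.cluster.local:2888:3888

# Written to /tmp/zookeeper/myid (1 on zk-server1, 2 on zk-server2)
1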

For example:

$ kubectl label node nodename1 role=server1
$ kubectl label node nodename2 role=server2
$ kubectl create -f zookeeper.yaml
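
Once the pods are running, a quick sanity check (a sketch; it assumes cluster DNS with the default cluster.local domain, and your pod name suffixes will differ):

# Per-pod DNS record created by the headless service + hostname/subdomain annotations
$ kubectl run -i --tty dnstest --image=busybox --restart=Never --rm -- \
    nslookup s1.zk.default.svc.cluster.local

# One member should report "Mode: leader", the other "Mode: follower"
$ kubectl exec $(kubectl get pods -l id=zk-server1 -o jsonpath='{.items[0].metadata.name}') -- \
    /opt/zookeeper/bin/zkServer.sh status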

This creates a statically configured ZooKeeper ensemble. If a pod is killed, its ReplicationController brings it back on the same node (because of the nodeSelector), where it reuses the same hostPath as its data directory. You can also use PersistentVolumeClaims for storage instead of hostPath, as sketched below.
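
If you'd rather not pin pods to nodes, the datadir can come from a PersistentVolumeClaim instead of hostPath. A minimal sketch (the claim name, size, and access mode are illustrative; you would need one claim per member and a matching PersistentVolume or dynamic provisioner):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zk-server1-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# ...and in the pod template, replace the hostPath volume with:
#      volumes:
#      - name: datadir
#        persistentVolumeClaim:
#          claimName: zk-server1-data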
