How to use a vSphere-backed shared NFS volume with Pods located on different Nodes

Create a vSphere-backed volume for the NFS server using the vsphere-volume provisioner (nfs-server-vsphere-volume.yaml, listed below).

$ kubectl create -f nfs-server-vsphere-volume.yaml
storageclass "nfs-server-sc" created
persistentvolumeclaim "nfs-server-pvc" created

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS    AGE
nfs-server-pvc   Bound     pvc-048776b4-a447-11e7-ab1f-0050569c67d8   200Gi      RWO           nfs-server-sc   1m

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    STORAGECLASS    REASON    AGE
pvc-048776b4-a447-11e7-ab1f-0050569c67d8   200Gi      RWO           Delete          Bound     default/nfs-server-pvc   nfs-server-sc             1m
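
Optionally, confirm that the dynamically provisioned PV is backed by a vSphere VMDK. This is a quick check I'd add here, not part of the original walkthrough; the jsonpath assumes the PV name shown above and should print the [datastore] .../*.vmdk path of the disk:

$ kubectl get pv pvc-048776b4-a447-11e7-ab1f-0050569c67d8 -o jsonpath='{.spec.vsphereVolume.volumePath}'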

Create the NFS server Pod, backed by the vSphere volume, along with its Service endpoint (nfs-server.yaml, listed below).

$ kubectl create -f nfs-server.yaml
replicationcontroller "nfs-server" created
service "nfs-server" created

$ kubectl get replicationcontroller
NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         1         1m

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-b4gbj   1/1       Running   0          1m

$ kubectl describe pod nfs-server-b4gbj | grep Node
Node:		kubernetes-node3/10.192.39.148
Node-Selectors:	<none>

Create a PV and PVC using the NFS Service endpoint IP.

$ kubectl describe service nfs | grep Endpoints
Endpoints:		172.1.13.3:2049
Endpoints:		172.1.13.3:20048
Endpoints:		172.1.13.3:111

Update the endpoint IP in nfs-app-data-volume.yaml (for example with the snippet below), then continue provisioning the app data volume.
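
If you prefer not to edit the manifest by hand, something like the following should substitute the Service endpoint IP automatically. This is just a sketch, not part of the original gist; it assumes the Service is named nfs-server and that GNU sed is available:

$ NFS_IP=$(kubectl get endpoints nfs-server -o jsonpath='{.subsets[0].addresses[0].ip}')
$ sed -i "s/server: .*/server: ${NFS_IP}/" nfs-app-data-volume.yaml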

$ kubectl create -f nfs-app-data-volume.yaml 
persistentvolume "nfs-data-pv" created
persistentvolumeclaim "nfs-data-pvc" created

$ kubectl get pvc nfs-data-pvc
NAME           STATUS    VOLUME        CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs-data-pvc   Bound     nfs-data-pv   200Gi      RWX                          1h

$ kubectl get pv nfs-data-pv
NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
nfs-data-pv   200Gi      RWX           Retain          Bound     default/nfs-data-pvc                            1h

Provision application Pods on different nodes using the shared PV (app-pods.yaml, listed below).

$ kubectl create -f app-pods.yaml 
pod "app-pod1" created
pod "app-pod2" created

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
app-pod1           1/1       Running   0          47s
app-pod2           1/1       Running   0          46s
nfs-server-b4gbj   1/1       Running   0          18m

$ kubectl describe pod app-pod1 | grep Node
Node:		kubernetes-node2/10.192.51.180
Node-Selectors:	<none>
$ kubectl describe pod app-pod2 | grep Node
Node:		kubernetes-node4/10.192.55.170
Node-Selectors:	<none>

Verify that the data written by both Pods is available in the shared volume.

$ kubectl exec -it app-pod1 /bin/sh
/ # ls /mnt/data/
app-pod1.txt  app-pod2.txt  index.html    lost+found
/ #
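
The same check can be made from the other Pod; for example, reading the file written by app-pod1 from inside app-pod2 (assuming the same /mnt/data mount path) should print hello:

$ kubectl exec -it app-pod2 -- cat /mnt/data/app-pod1.txt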

app-pods.yaml

apiVersion: v1
kind: Pod
metadata:
  name: app-pod1
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/data/app-pod1.txt && chmod o+rX /mnt /mnt/data/app-pod1.txt && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-data-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod2
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/data/app-pod2.txt && chmod o+rX /mnt /mnt/data/app-pod2.txt && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-data-pvc

nfs-app-data-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data-pv
  labels:
    storage-tier: nfs-volume-on-vsphere
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 172.1.13.3
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      storage-tier: nfs-volume-on-vsphere

nfs-server-vsphere-volume.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: nfs-server-sc
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-server-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: nfs-server-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

nfs-server.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: nfs-server-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server