@oshoval
Last active October 4, 2021 10:14
Using NFS as VM PVC

To create a VM that uses an NFS-backed PVC, do the following:

  1. Create the NFS storage class: kubectl create -f 01-nfs-sc.yaml
  2. Create the NFS server: kubectl create -f 02-nfs-server.yaml
  3. Create the NFS service: kubectl create -f 03-nfs-service.yaml
  4. Get the NFS service IP and substitute it into 04-nfs-pv.yaml:
    SERVICE_IP=$(kubectl get service nfs-service --no-headers | awk '{print $3}')
    sed s/NFS_SERVICE_IP/$SERVICE_IP/ 04-nfs-pv.yaml | kubectl create -f -
    Each PV can be used only once; for every additional PV, manually create a new one with a different name and path.
    To recycle a used PV, delete its disk image inside the nfs-server pod: rm -rf /data/nfs/disk%d/disk.img
  5. Optional: validate that the setup works by creating a DataVolume with 05-example-dv.yaml
    (don't forget to replace EXAMPLE_URL with the URL of the image to be imported).
  6. Replace RHEL6_QCOW_IMAGE_URL in 06-vm_rhel6_e1000e.yaml, create the VM, and start it.
    Connect to the console (for example with virtctl console) and log in with the credentials from the cloud-init section.
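Step 4 relies on the ClusterIP being the third column of the `kubectl get service --no-headers` output. A minimal sketch of that extraction, run against a hypothetical sample line instead of a live cluster (the IP below is made up):

```shell
# Sample output of: kubectl get service nfs-service --no-headers
# Columns: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE (IP is hypothetical)
sample="nfs-service   ClusterIP   10.96.120.5   <none>   2049/TCP,111/UDP   5m"

# awk splits on whitespace; the third field is the ClusterIP.
SERVICE_IP=$(echo "$sample" | awk '{print $3}')
echo "$SERVICE_IP"
```

Against a real cluster, the `echo "$sample"` is replaced by the actual `kubectl get service nfs-service --no-headers` call, exactly as written in step 4.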
# 01-nfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
# 02-nfs-server.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  labels:
    app: "nfs-server"
    cdi.kubevirt.io/testing: ""
spec:
  selector:
    matchLabels:
      app: nfs-server
  replicas: 1
  template:
    metadata:
      labels:
        app: nfs-server
        cdi.kubevirt.io/testing: ""
    spec:
      containers:
        - image: quay.io/awels/nfs-server-alpine:12
          imagePullPolicy: IfNotPresent
          name: nfs-server
          env:
            - name: SHARED_DIRECTORY
              value: /data/nfs
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: "/data/nfs"
              name: nfsdata
          command: ["/bin/bash", "-c"]
          args:
            - |
              chmod 777 /data/nfs;
              mkdir /data/nfs/disk1;
              chmod 777 /data/nfs/disk1;
              mkdir /data/nfs/disk2;
              chmod 777 /data/nfs/disk2;
              mkdir /data/nfs/disk3;
              chmod 777 /data/nfs/disk3;
              mkdir /data/nfs/disk4;
              chmod 777 /data/nfs/disk4;
              mkdir /data/nfs/disk5;
              chmod 777 /data/nfs/disk5;
              mkdir /data/nfs/disk6;
              chmod 777 /data/nfs/disk6;
              mkdir /data/nfs/disk7;
              chmod 777 /data/nfs/disk7;
              mkdir /data/nfs/disk8;
              chmod 777 /data/nfs/disk8;
              mkdir /data/nfs/disk9;
              chmod 777 /data/nfs/disk9;
              mkdir /data/nfs/disk10;
              chmod 777 /data/nfs/disk10;
              /usr/bin/nfsd.sh
      volumes:
        - name: nfsdata
          persistentVolumeClaim:
            claimName: nfs-data
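The ten repeated mkdir/chmod pairs in the container args above prepare one export directory per PV. The equivalent loop, pointed at a scratch directory here so it can be tried anywhere (the deployment itself uses /data/nfs):

```shell
# Equivalent of the ten mkdir/chmod pairs in the deployment's args,
# run against a temporary directory instead of /data/nfs.
BASE=$(mktemp -d)
for i in $(seq 1 10); do
  mkdir "$BASE/disk$i"
  chmod 777 "$BASE/disk$i"
done
ls "$BASE"
```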
# 03-nfs-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    app: nfs-server
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP
# 04-nfs-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv1
spec:
  storageClassName: nfs
  capacity:
    storage: "30Gi"
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  nfs:
    path: /disk1/
    server: NFS_SERVICE_IP
  persistentVolumeReclaimPolicy: Delete
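Step 4 notes that each additional PV must be created manually with a new name and path. A hypothetical helper that stamps out one manifest per exported disk directory by rewriting the name and path in 04-nfs-pv.yaml (it assumes the file is in the current directory and that SERVICE_IP already holds the nfs-service ClusterIP):

```shell
# Generate nfs-pv1.yaml .. nfs-pv10.yaml, one per /data/nfs/disk<N> export.
# SERVICE_IP must be set beforehand, as in step 4.
for i in $(seq 1 10); do
  sed -e "s/NFS_SERVICE_IP/$SERVICE_IP/" \
      -e "s/nfs-pv1/nfs-pv$i/" \
      -e "s|/disk1/|/disk$i/|" 04-nfs-pv.yaml > "nfs-pv$i.yaml"
done
# Then apply whichever PVs you need:
#   kubectl create -f nfs-pv2.yaml
```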
# 05-example-dv.yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: simple-dv-nfs
spec:
  source:
    http:
      url: EXAMPLE_URL
  pvc:
    storageClassName: nfs
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
# 06-vm_rhel6_e1000e.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 536870912
        }
      ]
    name.os.template.kubevirt.io/rhel6.10: Red Hat Enterprise Linux 6.0 or higher
  labels:
    app: rhel-6-e1000
    vm.kubevirt.io/template: rhel6-server-tiny
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.16.0
    os.template.kubevirt.io/rhel6.10: 'true'
    flavor.template.kubevirt.io/tiny: 'true'
    workload.template.kubevirt.io/server: 'true'
    vm.kubevirt.io/template.namespace: openshift
  name: rhel-6-e1000
spec:
  dataVolumeTemplates:
    - metadata:
        name: rhel-6-e1000-rootdisk-j039c
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
          storageClassName: nfs
          volumeMode: Filesystem
        source:
          http:
            url: >-
              RHEL6_QCOW_IMAGE_URL
  running: false
  template:
    metadata:
      annotations:
        vm.kubevirt.io/flavor: tiny
        vm.kubevirt.io/os: rhel6
        vm.kubevirt.io/workload: server
      labels:
        kubevirt.io/domain: rhel-6-e1000
        kubevirt.io/size: tiny
        vm.kubevirt.io/name: rhel-6-e1000
        os.template.kubevirt.io/rhel6.10: 'true'
        flavor.template.kubevirt.io/tiny: 'true'
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: cloudinitdisk
            - bootOrder: 1
              disk:
                bus: virtio
              name: rootdisk
          interfaces:
            - macAddress: ''
              masquerade: {}
              model: e1000e
              name: default
          rng: {}
          useVirtioTransitional: true
        machine:
          type: q35
        resources:
          requests:
            memory: 1Gi
      #evictionStrategy: LiveMigrate
      hostname: rhel-6-e1000
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd: { expire: false }
          name: cloudinitdisk
        - dataVolume:
            name: rhel-6-e1000-rootdisk-j039c
          name: rootdisk