Beta support for raw block volumes is available in K8s 1.13: https://kubernetes.io/blog/2019/03/07/raw-block-volume-support-to-beta/
The following in-tree volume types support raw blocks:
- AWS EBS
- Azure Disk
- Cinder
- Fibre Channel
- GCE PD
- iSCSI
- Local volumes
- RBD (Ceph)
- vSphere
For testing raw block support with Kata, I am simply using Local volumes.
Additional documentation on setting up local persistent volumes: https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
The following setup assumes a K8s 1.13 cluster with Kata Containers configured as a RuntimeClass:
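For reference, a minimal RuntimeClass for Kata might look like the sketch below. Note this is the beta shape that landed in K8s 1.14; on 1.13 the alpha `node.k8s.io/v1alpha1` API with `spec.runtimeHandler` is used instead. The handler name `kata` is an assumption and must match the runtime handler configured in the CRI implementation (e.g. containerd or CRI-O).

```yaml
# Sketch of a RuntimeClass for Kata (beta shape, K8s 1.14+).
# On K8s 1.13, use apiVersion node.k8s.io/v1alpha1 with
# spec.runtimeHandler instead of the top-level handler field.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # must match the handler configured in the CRI runtime
```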
A StorageClass must be created with volumeBindingMode set to "WaitForFirstConsumer":
$ cat local-storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
$ kubectl apply -f local-storage.yaml
$ kubectl get StorageClass
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   49m
Since I was running the test inside a VM, I simply created an image and mounted it on a loop device.
$ sudo truncate ~/disk.img --size 500M
$ sudo mkfs -t ext4 ~/disk.img
$ sudo /sbin/losetup /dev/loop4 ~/disk.img
# Creating a symlink at /mnt/disks/vol1. This is not strictly required, but I set up
# the persistent volume later using /mnt/disks/vol1; passing /dev/loop# directly
# should have worked as well.
$ sudo mkdir -p /mnt/disks
$ sudo ln -s /dev/loop4 /mnt/disks/vol1
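Before attaching the image to a loop device, it can be worth sanity-checking that the image was actually formatted. The snippet below is a sketch of that check; the path and size are examples, and `-F` is added so mkfs will operate on a regular file non-interactively.

```shell
# Sketch: create a small backing image and verify it is ext4-formatted
# (example path/size; needs no root since no loop device is involved).
img=/tmp/raw-block-test.img
truncate --size 64M "$img"
mkfs -t ext4 -q -F "$img"   # -F: allow mkfs to run on a regular file
file "$img"                 # should report an ext4 filesystem
```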
Now that local block storage is set up, create a PersistentVolume for it. Note that volumeMode has been set to Block so the volume is exposed as a raw block device.
$ cat local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 500M
  volumeMode: Block
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - clr-02
$ kubectl apply -f local-pv.yaml
$ kubectl get PersistentVolume
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS    REASON   AGE
example-local-pv   500M       RWO            Retain           Bound    default/example-local-claim   local-storage            47m
Here "clr-02" is the k8s node on which the block device has been set up.
Create a PersistentVolumeClaim specifying the storageClassName created previously, again with volumeMode set to Block.
$ cat local-pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  volumeMode: Block
  resources:
    requests:
      storage: 500M
$ kubectl apply -f local-pv-claim.yaml
Finally, create a pod that consumes the claim through volumeDevices, using the Kata runtime class:
$ cat pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  runtimeClassName: kata
  containers:
  - name: my-container
    image: debian
    command:
    - sleep
    - "3600"
    volumeDevices:
    - devicePath: /dev/xda
      name: my-volume
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: example-local-claim
The SYS_ADMIN capability is not required; I added it only to verify the device is passed through correctly by mounting it inside the container.
$ kubectl apply -f pod-pvc.yaml
$ # The PVC will show as Bound:
$ kubectl get PersistentVolumeClaim
NAME                  STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS    AGE
example-local-claim   Bound    example-local-pv   500M       RWO            local-storage   53m
$ kubectl exec -it my-pod -- bash
root@my-pod:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
sdb 8:16 0 8G 0 disk /
sdc 8:32 0 500M 0 disk /mnt
pmem0 259:0 0 512M 0 disk
`-pmem0p1 259:1 0 510M 0 part
root@my-pod:/#
root@my-pod:/# ls -la /dev/xda
b--------- 1 root disk 8, 32 Mar 15 23:27 /dev/xda
root@my-pod:/# mount /dev/xda /mnt
root@my-pod:/# ls /mnt
hello lost+found
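The leading `b` in the `ls -la` output confirms /dev/xda is a block device node, and its 8, 32 major/minor pair matches sdc in the lsblk listing above. As an aside, stat can print a node's major:minor directly; the sketch below runs it against /dev/null (major 1, minor 3 on Linux) only so the example works anywhere, and note that stat reports the numbers in hex, so /dev/xda would show 8:20.

```shell
# Sketch: read a device node's major:minor with stat.
# %t/%T print major/minor in hex; /dev/null is 1:3 on Linux.
stat -c 'major:minor = %t:%T' /dev/null
```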