This Gist provides a quick, basic installation of Logical Volume Manager Storage (LVMS) for OpenShift on a Single-Node OpenShift (SNO) deployment.
- Getting Started
- Operator Installation
- Creating a Second StorageClass
- Create a StorageProfile for CNV
- Appendix A: Example LVMCluster Deployment Using Labels
- Appendix B: Useful Disk Tooling
---

## Getting Started

Get information about the disks on the SNO deployment:

```shell
NODE_NAME=$(oc get no -o name)

cat <<EOF | oc debug $NODE_NAME
chroot /host
lsblk -o NAME,ROTA,SIZE,TYPE
EOF
```

NOTE: `NODE_NAME` is set on its own line so that the `oc debug $NODE_NAME` expansion actually sees the value (a one-line `VAR=... cmd` prefix would not).
The output will look like the following:

```
sh-5.1# lsblk -o NAME,ROTA,SIZE,TYPE
NAME   ROTA   SIZE TYPE
sda       1   1.8T disk
sdb       1   5.5T disk
sdc       1   7.3T disk
sdd       1   3.6T disk
sde       1   931G disk
|-sde1    1     1M part
|-sde2    1   127M part
|-sde3    1   384M part
`-sde4    1 930.5G part
sr0       1  1024M rom
sh-5.1# exit

Removing debug pod ...
```
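Since you'll want to pick disks of the same speed and type later, it can help to filter the `lsblk` output down to non-rotational (`ROTA=0`) whole disks. A minimal sketch, parsing a made-up sample in which `sdb` and `sdc` are SSDs (an assumption for illustration; on the node you would pipe the real `lsblk -d -o NAME,ROTA,SIZE,TYPE` output into the same `awk` program):

```shell
# Sample lsblk output; in this made-up data sdb and sdc are SSDs (ROTA=0).
lsblk_output='NAME ROTA SIZE TYPE
sda 1 1.8T disk
sdb 0 5.5T disk
sdc 0 7.3T disk
sde 1 931G disk'

# Skip the header row, keep non-rotational whole disks, print their names.
result=$(echo "$lsblk_output" | awk 'NR > 1 && $2 == 0 && $4 == "disk" { print $1 }')
echo "$result"
```

On a real node the same filter works against live output, e.g. `lsblk -d -o NAME,ROTA,SIZE,TYPE | awk 'NR > 1 && $2 == 0 && $4 == "disk" { print $1 }'`.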
---

Now run the following command (SNO is required, and the `oc` CLI should already be logged in). It lists the mappings between kernel disk assignments (i.e. `/dev/sdc`) and their stable by-path links (i.e. `/dev/disk/by-path/`):

```shell
NODE_NAME=$(oc get no -o name)

cat <<EOF | oc debug $NODE_NAME
chroot /host
ls -aslc /dev/disk/by-path/
EOF
```
The output will look like the following:

```
sh-5.1# ls -aslc /dev/disk/by-path/
total 0
0 drwxr-xr-x. 2 root root 260 Sep  7 00:30 .
0 drwxr-xr-x. 9 root root 180 Sep  7 00:30 ..
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:00:17.0-ata-8 -> ../../sr0
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:00:17.0-ata-8.0 -> ../../sr0
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:0:0 -> ../../sda
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:1:0 -> ../../sdc
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:2:0 -> ../../sdb
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:3:0 -> ../../sdd
0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0 -> ../../sde
0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part1 -> ../../sde1
0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part2 -> ../../sde2
0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part3 -> ../../sde3
0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part4 -> ../../sde4
sh-5.1# exit

Removing debug pod ...
```
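To turn that listing into a quick "device → by-path" table, the symlink targets can be parsed with `awk`. A small sketch against a captured two-line sample (an assumption for illustration; on the node you would pipe the real `ls -l /dev/disk/by-path/` output through the same program):

```shell
# Two sample symlink entries, as they appear in the by-path listing.
by_path_listing='pci-0000:1a:00.0-scsi-0:2:1:0 -> ../../sdc
pci-0000:1a:00.0-scsi-0:2:2:0 -> ../../sdb'

# $1 is the by-path name, $3 is the "../../sdX" target; strip the
# relative prefix and print "device  by-path" pairs.
mapping=$(echo "$by_path_listing" | awk '{ sub("../../", "", $3); printf "/dev/%s  /dev/disk/by-path/%s\n", $3, $1 }')
echo "$mapping"
```

The by-path names are the ones you'll paste into the `LVMCluster` CR later, since they stay stable across reboots while `/dev/sdX` names can move.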
---

Next, wipe the disk that you're planning to use for LVM. If you plan on using multiple disks, I recommend that the disks are of the same speed and type, and that they are non-rotational (which can be determined from the `ROTA` column in the output above).

CRITICAL: STOP!! DO NOT simply copy and paste what you see below. Copy, edit for your use case/SNO environment, and then paste. For example, I am using `/dev/sdc` (a single disk). If you need or want to use a different disk, take your time and edit the following commands accordingly.

```shell
cat <<EOF | oc debug $NODE_NAME
chroot /host
sudo wipefs -af /dev/sdc
sudo sgdisk --zap-all /dev/sdc
sudo dd if=/dev/zero of=/dev/sdc bs=1M count=100 oflag=direct,dsync
sudo blkdiscard /dev/sdc
EOF
```
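If you're unsure what `wipefs -af` actually does, here is a safe, self-contained demo against a throwaway image file rather than a real disk (nothing below touches your devices; the temporary filename is generated by `mktemp`):

```shell
# Create a throwaway 10 MiB regular file to stand in for a disk.
img=$(mktemp /tmp/fake-disk.XXXXXX)
truncate -s 10M "$img"

# Stamp a swap filesystem signature onto the image.
mkswap "$img" >/dev/null 2>&1

before=$(wipefs "$img")        # non-empty: wipefs reports the swap signature
wipefs -a "$img" >/dev/null    # erase all signatures (the real step does this to /dev/sdc)
after=$(wipefs "$img")         # empty: nothing left to find

echo "before wipe: ${before:+signature present}"
echo "after wipe:  ${after:-clean}"
rm -f "$img"
```

On the node, running `wipefs /dev/sdc` with no flags after the wipe is a quick way to confirm the disk reports no remaining signatures.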
---

## Operator Installation

Now you can apply the following LVM operator manifests. This installs the operator, but it does not create the LVM instance quite yet; we will do that in the next step (with the `LVMCluster` CR).

```shell
oc apply -f - <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
  name: openshift-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms
  namespace: openshift-storage
spec:
  installPlanApproval: Automatic
  name: lvms-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```
---

Next, using the information you retrieved in Steps 1 and 2, you can create the `LVMCluster` CR instance. This creates our LVM storage environment.

WARNING: DO NOT simply copy and paste what you see below. Copy, edit for your use case/SNO environment, and then paste. For example, I am using the by-path entry that maps to `/dev/sdc` (a single disk), per the listing above. If you need or want to use a different disk, take your time and edit the following information accordingly. If you need to add another disk, add another entry under the existing line in `deviceSelector.paths`.

```shell
cat <<EOF | oc apply -f -
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        deviceSelector:
          paths:
            - /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:2:1:0
        fstype: xfs
        thinPoolConfig:
          chunkSizeCalculationPolicy: Static
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
EOF
```
NOTE: The manifest above will create a `StorageClass` called `lvms-vg1`. The `vg1` part is taken from the `spec.storage.deviceClasses[].name` field above. LVMS prefixes its `StorageClass` objects with `lvms-`, which gives us the resulting name `lvms-vg1`.
---

## Creating a Second StorageClass

Now we're going to create another `StorageClass`. In this case it's a duplicate of `lvms-vg1` that allows for `Immediate` binding of `PersistentVolumeClaims` (i.e. claims are bound, and the corresponding `PersistentVolume` provisioned, as soon as they are created, regardless of any requesting workload). This is useful in some scenarios, and it won't hurt to use it by default. Because of this, I will make this `StorageClass` the default (which is going to be useful when deploying OpenShift Virtualization).

```shell
cat <<EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: lvms-vg1-immediate
  annotations:
    description: Provides RWO and RWOP Filesystem & Block volumes
    storageclass.kubernetes.io/is-default-class: 'true'
    storageclass.kubevirt.io/is-default-virt-class: 'true'
provisioner: topolvm.io
parameters:
  csi.storage.k8s.io/fstype: xfs
  topolvm.io/device-class: vg1
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
```
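To sanity-check the immediate binding, a throwaway claim can be applied (the name `test-immediate-pvc` is just an example). With `volumeBindingMode: Immediate` it should reach `Bound` shortly after creation, with no consuming Pod required:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-immediate-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: lvms-vg1-immediate
```

Check it with `oc get pvc test-immediate-pvc` (it should show `Bound`), then clean up with `oc delete pvc test-immediate-pvc`. The same claim against the default `lvms-vg1` class would sit in `Pending` until a Pod mounts it.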
## Create a StorageProfile for CNV

STOP: DO NOT CONTINUE UNLESS YOU HAVE ALREADY INSTALLED OPENSHIFT VIRTUALIZATION

The following is ONLY applicable AFTER installing OpenShift Virtualization on SNO. A `StorageProfile` modification is required after installing OpenShift Virtualization if you want to bring VMs up quickly (i.e. if you notice new VMs lag when starting, it's because you haven't created the correct `StorageProfile` and need to follow the instructions below). A `StorageProfile` is a CR which is included as part of OpenShift Virtualization/KubeVirt. You can read more about `StorageProfile` HERE.
---

With the `lvms-vg1-immediate` StorageClass deployed, you will need to create a corresponding `StorageProfile`. Deploy the following `StorageProfile` for the `lvms-vg1-immediate` StorageClass (further documentation can be found HERE):

```shell
cat <<EOF | oc apply -f -
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: lvms-vg1-immediate
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    cdi.kubevirt.io: ''
spec:
  cloneStrategy: snapshot
EOF
```
## Appendix A: Example LVMCluster Deployment Using Labels

Below is a sample `LVMCluster` that I use in my lab (just to show you a full example).

```shell
oc apply -f - <<EOF
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - deviceSelector:
          paths:
            - '/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0'
        fstype: xfs
        name: vg1
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/lvm-disk
                  operator: In
                  values:
                    - sdb
        thinPoolConfig:
          chunkSizeCalculationPolicy: Static
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
EOF
```
IMPORTANT: For this to work correctly you MUST apply the following label first!

```shell
NODE_NAME=$(oc get no -o name)
oc label $NODE_NAME topology.kubernetes.io/lvm-disk=sdb
```
## Appendix B: Useful Disk Tooling

If you want to explore your disk speed and other information listed above (paths, devices, etc.), you can use the following script that I've created (for SNO environments): HERE