@v1k0d3n
Last active December 10, 2024 20:15
Installing the LVM operator on SNO (OpenShift v4.17.x)

Installation of LVM on OpenShift v4.17.x

This Gist will provide a very quick, basic installation of Logical Volume Manager for OpenShift.

Getting Started

  1. Get information about the disks on the SNO deployment

    NODE_NAME=$(oc get no -o name)
    
    cat <<EOF | oc debug $NODE_NAME
    chroot /host
    lsblk -o NAME,ROTA,SIZE,TYPE
    EOF

    The output will look like the following:

    sh-5.1# lsblk -o NAME,ROTA,SIZE,TYPE
    sh-5.1# exit
    NAME   ROTA   SIZE TYPE
    sda       1   1.8T disk
    sdb       1   5.5T disk
    sdc       1   7.3T disk
    sdd       1   3.6T disk
    sde       1   931G disk
    |-sde1    1     1M part
    |-sde2    1   127M part
    |-sde3    1   384M part
    `-sde4    1 930.5G part
    sr0       1  1024M rom
    
    Removing debug pod ...
  2. Next, run the following command (SNO is required, and the CLI should already be working). It lists the mappings between kernel disk assignments (e.g. /dev/sdc) and their persistent by-path names (under /dev/disk/by-path/).

    NODE_NAME=$(oc get no -o name)
    
    cat <<EOF | oc debug $NODE_NAME
    chroot /host
    ls -aslc /dev/disk/by-path/
    EOF

    The output will look like the following:

    sh-5.1# ls -aslc /dev/disk/by-path/
    sh-5.1# exit
    total 0
    0 drwxr-xr-x. 2 root root 260 Sep  7 00:30 .
    0 drwxr-xr-x. 9 root root 180 Sep  7 00:30 ..
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:00:17.0-ata-8 -> ../../sr0
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:00:17.0-ata-8.0 -> ../../sr0
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:0:0 -> ../../sda
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:1:0 -> ../../sdc
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:2:0 -> ../../sdb
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:3:0 -> ../../sdd
    0 lrwxrwxrwx. 1 root root   9 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0 -> ../../sde
    0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part1 -> ../../sde1
    0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part2 -> ../../sde2
    0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part3 -> ../../sde3
    0 lrwxrwxrwx. 1 root root  10 Sep  7 00:30 pci-0000:1a:00.0-scsi-0:2:4:0-part4 -> ../../sde4
    
    Removing debug pod ...
  3. Next, wipe the disk you plan to use for LVM. If you plan on using multiple disks, I recommend they be of the same speed and type, and that they be non-rotational (which can be determined from the ROTA column in the output above).

    CRITICAL: STOP!! DO NOT simply copy and paste what you see below. Copy, edit for your use case/SNO environment, and then paste. For example, I am using /dev/sdc (single disk). If you need or want to use a different disk, take your time and edit the following information accordingly.

    NODE_NAME=$(oc get no -o name)
    
    cat <<EOF | oc debug $NODE_NAME
    chroot /host
    sudo wipefs -af /dev/sdc
    sudo sgdisk --zap-all /dev/sdc
    sudo dd if=/dev/zero of=/dev/sdc bs=1M count=100 oflag=direct,dsync
    sudo blkdiscard /dev/sdc
    EOF
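Before moving on, a couple of quick sanity checks can save a re-install. The sketch below (run in the same debug/chroot pattern as the steps above) filters lsblk down to non-rotational whole disks, resolves a by-path symlink back to its kernel device name (the example path is taken from the Step 2 output; substitute your own), and confirms the wipe left nothing behind. Note that `wipefs` with no flags only reports signatures, it does not erase, so empty output means the disk is clean.

```shell
NODE_NAME=$(oc get no -o name)

cat <<EOF | oc debug $NODE_NAME
chroot /host
# List only non-rotational whole disks (ROTA=0), per the step 3 recommendation:
lsblk -d -n -o NAME,ROTA,SIZE,TYPE | awk '\$2 == 0 && \$4 == "disk" {print \$1, \$3}'
# Resolve a by-path symlink back to its kernel device (example path from step 2):
readlink -f /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:2:1:0
# With no flags, wipefs only reports remaining signatures; no output means clean:
wipefs /dev/sdc
EOF
```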

Operator Installation

  1. Now you can install the following LVM operator manifests. This will install the operator, but it will not create the LVM instance quite yet; we will do that in the next section (with the LVMCluster CR).

    oc apply -f - <<EOF
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/audit: privileged
        pod-security.kubernetes.io/warn: privileged
      name: openshift-storage
    
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage
    
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      installPlanApproval: Automatic
      name: lvms-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
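Before creating the CR, it's worth confirming the operator actually installed. A quick sketch, assuming the Subscription name (lvms) and namespace from the manifests above; the exact CSV name includes the bundle version, so it varies by release:

```shell
# The Subscription status records which CSV it resolved and installed:
oc get subscription lvms -n openshift-storage -o jsonpath='{.status.installedCSV}{"\n"}'

# The CSV should reach the Succeeded phase before you create the LVMCluster:
oc get csv -n openshift-storage

# The operator pod should be Running:
oc get pods -n openshift-storage
```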

LVM Deployment and Customization

  1. Next, using the information you retrieved in Steps 1 and 2 of the previous section, you can install the LVMCluster CR instance. This will create our LVM storage environment.

    WARNING: DO NOT simply copy and paste what you see below. Copy, edit for your use case/SNO environment, and then paste. For example, I am using the by-path name for /dev/sdc (a single disk; see the mapping from Step 2 above). If you need or want to use a different disk, take your time and edit the following information accordingly. If you need to add another disk, add another entry under the paths list.

    cat <<EOF | oc apply -f -
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: lvmcluster
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
          - name: vg1
            deviceSelector:
              paths:
                - /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:2:1:0
            fstype: xfs
            thinPoolConfig:
              chunkSizeCalculationPolicy: Static
              name: thin-pool-1
              overprovisionRatio: 10
              sizePercent: 90
    EOF

    NOTE: The manifest above will create a StorageClass called lvms-vg1. The vg1 portion is taken from the spec.storage.deviceClasses[].name field above; LVMS prefixes its StorageClass names with lvms-, which gives us the resulting name lvms-vg1.
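A sanity check after applying the CR, assuming the names used above (lvmcluster and vg1); the LVMCluster takes a short while to report Ready while it creates the volume group:

```shell
# The LVMCluster status should eventually show Ready:
oc get lvmcluster lvmcluster -n openshift-storage

# Once ready, the operator creates the lvms-vg1 StorageClass automatically:
oc get storageclass lvms-vg1
```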

Creating a Second StorageClass

  1. Now we're going to create another StorageClass, but in this case it will be a duplicate of lvms-vg1 that allows for immediate binding of PersistentVolumeClaims (i.e. claims are bound as soon as they are created, regardless of whether a requesting workload or corresponding PersistentVolume exists yet). This is useful in some scenarios, and it won't hurt to use it by default. Because of this, I will make this StorageClass the default (which is going to be useful when deploying OpenShift Virtualization).

    cat <<EOF | oc apply -f -
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: lvms-vg1-immediate
      annotations:
        description: Provides RWO and RWOP Filesystem & Block volumes
        storageclass.kubernetes.io/is-default-class: 'true'
        storageclass.kubevirt.io/is-default-virt-class: 'true'
    provisioner: topolvm.io
    parameters:
      csi.storage.k8s.io/fstype: xfs
      topolvm.io/device-class: vg1
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    EOF
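To see the Immediate binding behavior in action, you can create a throwaway PVC; against the default WaitForFirstConsumer class it would sit Pending until a pod consumed it, but against lvms-vg1-immediate it should go Bound on its own. The PVC name and namespace below are arbitrary examples, not anything the setup requires:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvms-binding-test
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: lvms-vg1-immediate
EOF

# Should report Bound shortly, with no consuming workload required:
oc get pvc lvms-binding-test -n default

# Clean up the test claim:
oc delete pvc lvms-binding-test -n default
```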

Create a StorageProfile for CNV

STOP: DO NOT CONTINUE UNLESS YOU HAVE ALREADY FIRST INSTALLED OPENSHIFT VIRTUALIZATION

The following is ONLY applicable AFTER Installing OpenShift Virtualization on SNO. A StorageProfile modification is required after installing OpenShift Virtualization if you want to bring VMs up quickly (i.e. if you notice new VMs lag when starting, it's because you haven't created the correct StorageProfile and need to follow the instructions below). A StorageProfile is a CR which is included as part of OpenShift Virtualization/KubeVirt. You can read more about StorageProfile HERE.

  1. With the lvms-vg1-immediate StorageClass deployed, you will need to create a corresponding StorageProfile. Deploy the following StorageProfile for the lvms-vg1-immediate StorageClass (further documentation can be found HERE).

    cat <<EOF | oc apply -f -
    ---
    apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: lvms-vg1-immediate
      labels:
        app: containerized-data-importer
        app.kubernetes.io/component: storage
        app.kubernetes.io/managed-by: cdi-controller
        app.kubernetes.io/part-of: hyperconverged-cluster
        cdi.kubevirt.io: ''
    spec:
      cloneStrategy: snapshot
    EOF
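To confirm CDI picked up the override, you can inspect the profile; the status stanza is populated by the CDI controller and should reflect the snapshot clone strategy (StorageProfile is cluster-scoped, so no namespace is needed):

```shell
# Inspect both spec and the controller-populated status:
oc get storageprofile lvms-vg1-immediate -o yaml
```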

Appendix A: Example LVMCluster Deployment Using Labels

Below is a sample LVMCluster that I use in my lab (just to show you a full example).

oc apply -f - <<EOF
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - deviceSelector:
          paths:
            - '/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0'
        fstype: xfs
        name: vg1
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/lvm-disk
                  operator: In
                  values:
                    - sdb
        thinPoolConfig:
          chunkSizeCalculationPolicy: Static
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
EOF

IMPORTANT: For this to work correctly, you MUST apply the following label to the node!

NODE_NAME=$(oc get no -o name)

oc label $NODE_NAME topology.kubernetes.io/lvm-disk=sdb
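To verify the label landed (and that the nodeSelector in the CR above will match), list nodes with the label as a column; `-L` prints the value of the given label key for each node:

```shell
# The LVM-DISK column should show sdb for the labeled node:
oc get nodes -L topology.kubernetes.io/lvm-disk
```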

Appendix B: Useful Disk Tooling

If you want to explore disk speed and other information listed above (paths, devices, etc.), you can use the following script that I've created (for SNO environments): HERE
