Notes: Rook / Multus on crc

Bootstrap crc and set up the oc tools

wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
sudo tar -xJvf crc-linux-amd64.tar.xz
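# the extracted directory name is versioned; adjust it to match the release you downloaded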
sudo mv crc-linux-1.22.0-amd64/crc /usr/local/bin/
sudo tar zxf openshift-client-linux.tar.gz -C /usr/local/bin/
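
To confirm both tools are on the PATH, check their versions (output varies with the release you downloaded):

crc version
oc version --client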

Set up crc and add new disks for Rook purposes

crc config set skip-check-daemon-systemd-sockets true
crc config set skip-check-daemon-systemd-unit true
crc config set enable-cluster-monitoring false
crc config set consent-telemetry no
# Pull the secret from: https://console.redhat.com/openshift/create/local
# and set it to the crc instance config
crc config set pull-secret-file ~/.pull-secret.txt
crc config view
crc setup
crc start
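
Once crc start completes, you can point oc at the new cluster and verify the node is ready:

eval $(crc oc-env)
oc get nodes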

[OPTIONAL]: SSH Access to the CRC instance

ssh -i ~/.crc/machines/crc/id_ecdsa core@"$(crc ip)"

[OPTIONAL]: Adding more resources to CRC/OCP

  • Allocate more vCPU: crc config set cpus 6
  • Allocate additional memory: crc config set memory 16384
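
These settings take effect on the next start, so restart the instance if it is already running:

crc stop
crc start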

ROOK disks

Create two thin-provisioned disks for ODF/Rook:

sudo -S qemu-img create -f raw ~/.crc/vdb 100G
sudo -S qemu-img create -f raw ~/.crc/vdc 100G
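
qemu-img can confirm the raw images are sparse (a 100G virtual size backed by almost no actual disk usage):

qemu-img info ~/.crc/vdb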

Attach these devices to the CRC VM

crc stop
sudo virsh list --all
sudo virsh dumpxml crc > ~/crc.xml
vim ~/crc.xml

Add the following section to the <devices> block of crc.xml. Make sure to set the correct disk path: libvirt does not expand ~, so use absolute paths to the image files.

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='~/.crc/vdb' index='1'/>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='~/.crc/vdc' index='2'/>
  <backingStore/>
  <target dev='vdc' bus='virtio'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</disk>

Apply the XML file and start CRC

sudo virsh define ~/crc.xml
crc start
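
Alternatively, instead of hand-editing the XML, the disks can be attached with virsh attach-disk while the domain is stopped; a sketch, assuming the images sit under /home/$USER/.crc (again, libvirt needs absolute paths):

sudo virsh attach-disk crc /home/$USER/.crc/vdb vdb --targetbus virtio --subdriver raw --cache none --config
sudo virsh attach-disk crc /home/$USER/.crc/vdc vdc --targetbus virtio --subdriver raw --cache none --config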

List devices to verify

ssh -i ~/.crc/machines/crc/id_ecdsa core@"$(crc ip)" lsblk

[OPTIONAL] - Multus Network config

> cat libvirt-ocs-cluster.xml
<network>
   <name>ocs-cluster</name>
   <uuid>6fa7adf3-24e6-48a7-92d9-fab07d3d31bb</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr29' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6e'/>
   <ip address='172.16.143.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='172.16.143.2' end='172.16.143.254' />
     </dhcp>
   </ip>
</network>
> cat libvirt-ocs-public.xml
<network>
   <name>ocs-public</name>
   <uuid>4760767a-a73a-423f-9324-26c7c9428692</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr28' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6f'/>
   <ip address='172.16.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='172.16.142.2' end='172.16.142.254' />
     </dhcp>
   </ip>
</network>

And create the two networks (note that each network needs its own bridge name and MAC address):

sudo virsh net-create --validate --file libvirt-ocs-cluster.xml
sudo virsh net-create --validate --file libvirt-ocs-public.xml
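
Both networks should now be listed as active (net-create also starts them, as transient networks):

sudo virsh net-list --all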

Shut down crc with crc stop and, before restarting it, attach the two new networks to the instance:

sudo virsh attach-interface --domain crc --type bridge --source virbr28 --model virtio --config
sudo virsh attach-interface --domain crc --type bridge --source virbr29 --model virtio --config
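
The attached interfaces can be verified against the domain definition:

sudo virsh domiflist crc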

Note: if crc is redeployed, you can avoid repeating the attach steps above by dumping the resulting crc.xml and redefining the libvirt instance from it:

crc stop
sudo virsh list --all
sudo virsh dumpxml crc > ~/crc.xml
# after a redeploy, restore the saved definition before starting
sudo virsh define ~/crc.xml
crc start

[OPTIONAL]: Check the two attached networks

The new interface names reported here (enp6s0 and enp7s0 in this setup) are the ones referenced as master in the NADs below.

ssh -i ~/.crc/machines/crc/id_ecdsa core@"$(crc ip)" ip -o -4 a

Build the Ceph cluster and attach Multus interfaces to Pods

Once crc is running, deploy the Rook CRDs, common resources, and the operator:

oc apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/crds.yaml
oc apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/common.yaml
oc apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/operator-openshift.yaml
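
Before creating the NADs, you may want to wait for the operator pod to come up; a sketch using the operator's default label:

oc -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s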

Before deploying the Ceph cluster, create the two NetworkAttachmentDefinitions (NADs):

> cat ocs-cluster.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-cluster
  namespace: rook-ceph
spec:
  config: |-
    {
       "cniVersion": "0.3.1",
       "name": "ocs-cluster",
       "type": "macvlan",
       "master": "enp6s0",
       "mode": "bridge",
       "ipam": {
         "type": "whereabouts",
         "range": "172.16.143.110/24",
         "exclude": [
           "172.16.143.1/32",
           "172.16.143.255/32",
           "172.16.143.242/32"
         ]
       }
    }
> cat ocs-public.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: rook-ceph
spec:
  config: |-
    {
       "cniVersion": "0.3.1",
       "name": "ocs-public",
       "type": "macvlan",
       "master": "enp7s0",
       "mode": "bridge",
       "ipam": {
         "type": "whereabouts",
         "range": "172.16.142.110/24",
         "exclude": [
           "172.16.142.1/32",
           "172.16.142.255/32",
           "172.16.143.242/32"
         ]
       }
    }

and apply them:

for net in cluster public; do oc create -f ocs-$net.yaml; done
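
Verify that both NADs exist in the rook-ceph namespace:

oc -n rook-ceph get network-attachment-definitions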

The two NADs are created in the rook-ceph namespace, so the cluster definition must be edited to reference them with namespaced selectors:

diff --git a/deploy/examples/cluster-test.yaml b/deploy/examples/cluster-test.yaml
index d8b5b94f3..ca8113c6d 100644
--- a/deploy/examples/cluster-test.yaml
+++ b/deploy/examples/cluster-test.yaml
@@ -27,6 +27,11 @@ metadata:
   namespace: rook-ceph # namespace:cluster
 spec:
   dataDirHostPath: /var/lib/rook
+  network:
+    provider: multus
+    selectors:
+      cluster: rook-ceph/ocs-cluster
+      public: rook-ceph/ocs-public
   cephVersion:
     image: quay.io/ceph/ceph:v17
     allowUnsupported: true

This process is described in the Rook documentation.

Finally, apply the Ceph cluster and the toolbox:

oc apply -f cluster-test.yaml
oc apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml
> oc get pods
NAME                                             READY   STATUS             RESTARTS      AGE
csi-cephfsplugin-bbrhj                           3/3     Running            0             83s
csi-cephfsplugin-holder-my-cluster-wdndz         1/1     Running            0             83s
csi-rbdplugin-7kmc9                              3/3     Running            0             83s
csi-rbdplugin-holder-my-cluster-4cfhl            1/1     Running            0             83s
rook-ceph-mgr-a-77f4f97678-5qmxq                 2/2     Running            0             89s
rook-ceph-mon-a-9f76f4fcc-wtmff                  1/1     Running            0             2m1s
rook-ceph-osd-0-584ccfc684-v554n                 1/1     Running            0             47s
rook-ceph-osd-1-57d9fbcd96-64pnv                 1/1     Running            0             47s

and access it via the toolbox pod.
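
For example, assuming the default toolbox deployment name rook-ceph-tools:

oc -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s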

Some results

oc describe pod rook-ceph-mon-a-9f76f4fcc-wtmff shows the ocs-public network:

Normal  AddedInterface  52s   multus             Add eth0 [10.217.0.52/23] from openshift-sdn
Normal  AddedInterface  52s   multus             Add net1 [172.16.142.3/24] from rook-ceph/ocs-public
Normal  Pulled          52s   kubelet            Container image "quay.io/ceph/ceph:v17" already present on machine

and both the ocs-public and ocs-cluster networks are attached to the OSDs (e.g. osd0):

Normal  AddedInterface  85s   multus             Add eth0 [10.217.0.63/23] from openshift-sdn
Normal  AddedInterface  84s   multus             Add net1 [172.16.143.3/24] from rook-ceph/ocs-cluster
Normal  AddedInterface  84s   multus             Add net2 [172.16.142.11/24] from rook-ceph/ocs-public

The toolbox pod also shows that the two networks are configured as the public and cluster networks within Ceph:

[rook@rook-ceph-tools-f87879d85-gt7m7 /]$ ceph config dump
WHO     MASK  LEVEL     OPTION                                 VALUE              RO
global        advanced  cluster_network                        172.16.143.110/24  *
global        basic     log_to_file                            false
global        advanced  mon_allow_pool_delete                  true
global        advanced  mon_allow_pool_size_one                true
global        advanced  mon_cluster_log_file
global        advanced  public_network                         172.16.142.110/24  *
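
As a further check from the toolbox, ceph osd dump should report OSD addresses inside the two Multus ranges:

ceph osd dump | grep "^osd"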