minikube start --force --memory="4096" --cpus="2" -b kubeadm --kubernetes-version="v1.19.2" --driver="kvm2" --feature-gates="BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true"
minikube ssh "sudo mkdir -p /mnt/vda1/var/lib/rook;sudo ln -s /mnt/vda1/var/lib/rook /var/lib/rook"
We need at least one empty disk for our standalone Ceph cluster, so let's create one and attach it to the minikube VM:
sudo qemu-img create -f raw /var/lib/libvirt/images/minikube-box-vm-disk-50G 50G
virsh -c qemu:///system attach-disk minikube --source /var/lib/libvirt/images/minikube-box-vm-disk-50G --target vdb --cache none
virsh -c qemu:///system reboot --domain minikube
minikube ssh "sudo lsblk"
After the reboot, start minikube again with the same options:
minikube start --force --memory="4096" --cpus="2" -b kubeadm --kubernetes-version="v1.19.2" --driver="kvm2" --feature-gates="BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true"
git clone https://github.com/rook/rook/
cd rook/deploy/examples/
alias k="kubectl"
alias kc="kubectl create -f"
Patch cluster-test.yaml to use hostNetwork: true, since the goal is to expose this cluster outside of minikube and connect OpenStack to the Ceph cluster it provides. Apply the following diff to the cluster-test.yaml manifest before creating the cluster:
diff --git a/deploy/examples/cluster-test.yaml b/deploy/examples/cluster-test.yaml
index cd3a3d111..2254cacd8 100644
--- a/deploy/examples/cluster-test.yaml
+++ b/deploy/examples/cluster-test.yaml
@@ -27,6 +27,8 @@ metadata:
   namespace: rook-ceph # namespace:cluster
 spec:
   dataDirHostPath: /var/lib/rook
+  network:
+    hostNetwork: true
   cephVersion:
     image: quay.io/ceph/ceph:v16.2.7
     allowUnsupported: true
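The same edit can be scripted instead of applying the diff by hand; a sketch, assuming mikefarah's yq v4 is installed:
# Set spec.network.hostNetwork on the CephCluster document, in place.
yq -i '(select(.kind == "CephCluster") | .spec.network.hostNetwork) = true' cluster-test.yaml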
Then create the CRDs, the common resources, the operator, the test cluster and the toolbox:
for i in crds common operator cluster-test toolbox; { kubectl create -f $i.yaml; }
And wait for the cluster:
`$ k get pods -n rook-ceph -w`
> k get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-cmnzc 3/3 Running 0 4m1s
csi-cephfsplugin-provisioner-6d4bd9b669-jkwkb 6/6 Running 0 4m1s
csi-rbdplugin-provisioner-6bcd78bb89-hbn6d 6/6 Running 0 4m1s
csi-rbdplugin-s85sj 3/3 Running 0 4m1s
rook-ceph-mgr-a-595dc9f57f-2rg89 1/1 Running 0 3m12s
rook-ceph-mon-a-6b4c8b75df-s9qxr 1/1 Running 0 3m28s
rook-ceph-operator-7bf8ff479-fn2x8 1/1 Running 0 7m23s
rook-ceph-osd-0-67df5fd4d7-s6brg 1/1 Running 0 106s
rook-ceph-osd-prepare-minikube-lcm88 0/1 Completed 0 113s
rook-ceph-tools-5c6844fcd5-bb75h 1/1 Running 0 9s
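Instead of watching the pod list, you can block until the relevant deployments are up; a sketch using the deployment names created above:
# Wait for the operator rollout, then for the toolbox to become Available.
k -n rook-ceph rollout status deploy/rook-ceph-operator
k -n rook-ceph wait --for=condition=Available deploy/rook-ceph-tools --timeout=10m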
> kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
[rook@rook-ceph-tools-5c6844fcd5-bb75h /]$ ceph -s
cluster:
id: 59998731-37b2-4a5d-a6c6-7d5e15f16408
health: HEALTH_OK
services:
mon: 1 daemons, quorum a (age 4m)
mgr: a(active, since 2m)
osd: 1 osds: 1 up (since 2m), 1 in (since 2m)
data:
pools: 1 pools, 32 pgs
objects: 0 objects, 0 B
usage: 5.0 MiB used, 50 GiB / 50 GiB avail
pgs: 32 active+clean
[rook@rook-ceph-tools-5c6844fcd5-bb75h /]$ cat /etc/ceph/ceph.conf
[global]
mon_host = 192.168.39.71:6789
[client.admin]
keyring = /etc/ceph/keyring
[rook@rook-ceph-tools-5c6844fcd5-bb75h /]$ ceph osd dump
epoch 12
fsid 59998731-37b2-4a5d-a6c6-7d5e15f16408
created 2022-04-22T08:00:09.332006+0000
modified 2022-04-22T08:02:01.929903+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 4
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client jewel
require_osd_release pacific
stretch_mode_enabled false
pool 1 'device_health_metrics' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 10 flags hashpspool stripe_width 0 application mgr_devicehealth
max_osd 1
osd.0 up in weight 1 up_from 8 up_thru 8 down_at 0 last_clean_interval [0,0) [v2:192.168.39.71:6802/4122858893,v1:192.168.39.71:6803/4122858893] [v2:192.168.39.71:6804/4122858893,v1:192.168.39.71:6805/4122858893] exists,up 37bdd9a3-27b8-4a78-9709-c4cef983fa98
blocklist 192.168.39.71:0/972107068 expires 2022-04-23T08:01:56.083964+0000
blocklist 192.168.39.71:6801/1133687069 expires 2022-04-23T08:01:56.083964+0000
blocklist 192.168.39.71:6800/1133687069 expires 2022-04-23T08:01:56.083964+0000
blocklist 192.168.39.71:6800/586103330 expires 2022-04-23T08:01:44.618135+0000
blocklist 192.168.39.71:6801/586103330 expires 2022-04-23T08:01:44.618135+0000
blocklist 192.168.39.71:0/2482452871 expires 2022-04-23T08:01:44.618135+0000
blocklist 192.168.39.71:0/2235038863 expires 2022-04-23T08:01:56.083964+0000
blocklist 192.168.39.71:0/3904768403 expires 2022-04-23T08:01:44.618135+0000
> minikube ssh "sudo ip -o -4 a"
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: eth0 inet 192.168.122.199/24 brd 192.168.122.255 scope global dynamic eth0\ valid_lft 2510sec preferred_lft 2510sec
3: eth1 inet 192.168.39.71/24 brd 192.168.39.255 scope global dynamic eth1\ valid_lft 2510sec preferred_lft 2510sec
5: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
We can see the OSD map as well as the Ceph monitors exposed on the 192.168.39.0/24 network, which is the minikube network:
> sudo virsh net-list
Name State Autostart Persistent
-------------------------------------------------
minikube-net active yes yes
> sudo virsh net-dumpxml minikube-net
<network connections='1'>
  <name>minikube-net</name>
  <uuid>2f1106fa-9619-44b1-ba26-5b24721da4a9</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:ff:48:28'/>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.254'/>
    </dhcp>
  </ip>
</network>
- Create the pools (volumes, images and vms, which will back Cinder, Glance and Nova respectively):
for pool in volumes images vms; {
ceph osd pool create $pool;
ceph osd pool application enable $pool rbd;
}
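Optionally initialize each pool for RBD before the first image is created, as the Ceph documentation recommends; same loop style as above:
for pool in volumes images vms; {
  rbd pool init $pool;
}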
- Create a key for the client.openstack user:
ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
ceph auth ls
client.openstack
    key: AQD1ZmJiBVOvMBAAJkaQo567m2sI9uZwVS5tXg==
    caps: [mgr] allow *
    caps: [mon] profile rbd
    caps: [osd] profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images
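This key is exactly what TripleO will need in external-ceph.yaml below; to print just the secret from the toolbox:
# Same value as shown by ceph auth ls above.
ceph auth get-key client.openstack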
Let's follow the TripleO standalone deployment guide to prepare an environment that will be connected to the Ceph cluster.
Attach a new interface, bridged on minikube-net, to the standalone VM so that it can reach the Kubernetes cluster (and especially Ceph):
br=$(sudo virsh net-dumpxml minikube-net | awk '/virbr/ {print $2}' | cut -d= -f2)
sudo virsh attach-interface --domain standalonec9_default --type bridge --source $br --model virtio --config --live
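Inside the standalone VM the hot-plugged NIC still needs an address on 192.168.39.0/24. A sketch, assuming the new interface shows up as eth2 and dhclient is available in the guest:
# Identify the new interface, then lease an address from minikube-net's DHCP.
ip link
sudo dhclient -v eth2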
Then try to reach the Ceph cluster (the monitor answers with its protocol banner):
telnet 192.168.39.71 6789
Trying 192.168.39.71...
Connected to 192.168.39.71.
Escape character is '^]'.
ceph v027'G'
telnet> quit
Connection closed.
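The same reachability check works without telnet, assuming an nc build that supports -z:
nc -vz 192.168.39.71 6789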
After preparing the environment as per the procedure above, edit the /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml environment file, adding the Ceph information gathered in the previous steps.
[stack@standalone ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml | grep -vE "#|^$"
resource_registry:
  OS::TripleO::Services::CephExternal: ../deployment/cephadm/ceph-client.yaml
parameter_defaults:
  CephClusterFSID: '59998731-37b2-4a5d-a6c6-7d5e15f16408'
  CephClientKey: 'AQD1ZmJiBVOvMBAAJkaQo567m2sI9uZwVS5tXg=='
  CephExternalMonHost: '192.168.39.71'
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  CephClientUserName: openstack
  CinderEnableIscsiBackend: false
and we're ready to deploy!
export IP=192.168.24.2
export NETMASK=24
export INTERFACE=eth1
sudo openstack tripleo deploy \
  --templates \
  --local-ip=$IP/$NETMASK \
  -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
  -e "$HOME/containers-prepare-parameters.yaml" \
  -e "$HOME/standalone_parameters.yaml" \
  -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  --output-dir $HOME \
  --standalone
When the deployment finishes, point the client at the standalone cloud and run a quick smoke test:
export OS_CLOUD=standalone
openstack endpoint list
openstack volume create test --size 1
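If everything is wired up correctly, the new Cinder volume also shows up as an RBD image on the Ceph side; from the rook-ceph toolbox pod:
# The volume created above should be listed in the volumes pool.
rbd ls -p volumes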