ceph-install-steps.yaml
Last active September 17, 2018 08:02
First, in Kubernetes, check that the secret for Ceph is set up:
calvin@calvinh-ws:~/Source/canonical-kubernetes-demos/cdk-ceph$ kubectl get secrets
NAME          TYPE                DATA   AGE
ceph-secret   kubernetes.io/rbd   1      14m
If this secret does not exist, it needs to be created. Note that the secret must be created in every namespace in the cluster that you want to use with Ceph storage, as it contains the API key Kubernetes uses to interact with Ceph. The steps to create it are explained here: https://github.com/CanonicalLtd/canonical-kubernetes-demos/tree/master/cdk-ceph
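For reference, a minimal sketch of such a secret. This is an assumption based on the `kubernetes.io/rbd` type shown above, not the exact manifest from the repository: the `key` value is the base64-encoded output of `sudo ceph auth get-key client.admin` run on a monitor, and the value below is a made-up placeholder, not a real credential.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default          # repeat in each namespace that needs Ceph storage
type: kubernetes.io/rbd
data:
  # base64 of: sudo ceph auth get-key client.admin   (placeholder value)
  key: QVFCQVBsYWNlaG9sZGVyS2V5RXhhbXBsZT09
```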
If the disks are attached to the machines, first check that they are present on the Kubernetes worker nodes using fdisk. In the example below, /dev/sdc is a 10 GiB disk that currently has no file system and is unmounted:
ubuntu@machine-6:~$ sudo fdisk -l
Disk /dev/loop0: 87.9 MiB, 92164096 bytes, 180008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 10.6 MiB, 11141120 bytes, 21760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 25.7 MiB, 26894336 bytes, 52528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 10.1 MiB, 10567680 bytes, 20640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xdcd70aad

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 67108830 67106783  32G 83 Linux

Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xba3a09cc

Device     Boot Start       End   Sectors Size Id Type
/dev/sdb1        2048 419428351 419426304 200G  7 HPFS/NTFS/exFAT

Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Next, we prepare the disk using ceph-disk prepare:
ubuntu@machine-6:~$ sudo ceph-disk prepare /dev/sdc
Creating new GPT entries.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=327615 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=1310459, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
The next step is to activate the disk:
ubuntu@machine-6:~$ sudo ceph-disk activate /dev/sdc1
Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service.
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /lib/systemd/system/ceph-osd@.service.
Run these commands on each of the Kubernetes worker nodes in your cluster; the disk may of course be /dev/sdb rather than /dev/sdc in your environment.
Once you have run them, the corresponding error in Juju should disappear (here, the commands have been run for ceph-osd/0):
Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  active    idle   6        51.140.44.32           Unit is ready and clustered
ceph-mon/1   active    idle   7        51.140.33.241          Unit is ready and clustered
ceph-mon/2   active    idle   8        51.140.38.220          Unit is ready and clustered
ceph-osd/0*  active    idle   6        51.140.44.32           Unit is ready (1 OSD)
ceph-osd/1   blocked   idle   7        51.140.33.241          No block devices detected using current configuration
ceph-osd/2   blocked   idle   8        51.140.38.220          No block devices detected using current configuration
To use the disks in Kubernetes, we create a storage class:

wget https://raw.githubusercontent.com/CanonicalLtd/canonical-kubernetes-demos/master/cdk-ceph/ceph-storageclass.yaml
kubectl apply -f ceph-storageclass.yaml
Note that unlike secrets, storage classes apply cluster-wide, but the secret referenced by the storage class must be created in each namespace that uses it.
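The fetched ceph-storageclass.yaml will contain values specific to your cluster; the following is only a sketch of what an rbd storage class of this era typically looks like. The monitor addresses are taken from the ceph-mon units in the Juju status above with the default port 6789, and the pool name and storage class name are assumptions, not values from the repository:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd               # assumed name
provisioner: kubernetes.io/rbd
parameters:
  # ceph-mon unit addresses from the Juju status; 6789 is the default mon port
  monitors: 51.140.44.32:6789,51.140.33.241:6789,51.140.38.220:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd                    # placeholder pool name
  userId: admin
  userSecretName: ceph-secret  # must exist in each namespace that creates PVCs
```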
We can test this by creating a PVC and checking that the claim has been bound:

wget https://raw.githubusercontent.com/CanonicalLtd/canonical-kubernetes-demos/master/cdk-ceph/cdk-pvc-test.yaml
kubectl apply -f cdk-pvc-test.yaml
kubectl get pvc
kubectl get pv
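For reference, a PVC exercising such a storage class looks roughly like the sketch below. This is not the contents of cdk-pvc-test.yaml; the claim name, storage class name, and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc-test          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd   # must match the storage class created above
  resources:
    requests:
      storage: 1Gi             # placeholder size
```

If provisioning worked, `kubectl get pvc` should show the claim as Bound, with a matching PV in `kubectl get pv`.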
Finally, if you want to remove Ceph, you can destroy the storage pool, remove the secret and the storage class, and use file system mounts instead.