@psvmcc
Created May 28, 2019 08:05

Rook on Digital Ocean


What problem are we solving?

Storage on Kubernetes.

tl;dr

Rook - storage orchestration for Kubernetes.

In this guide, Rook runs Ceph on Kubernetes.

Agenda

  • Deploy the Rook Operator
  • Create a Rook Cluster
  • Add Block Storage
  • Verify Block Storage Operation
  • Ceph Toolbox

Documentation

Rook : https://github.com/rook/rook

Ceph : https://ceph.com/

Prerequisites

If you selected the $20 Digital Ocean droplets, each one has:

  • 2 vCPUs
  • 4 GB of RAM
  • 80 GB of SSD storage

dataDirHostPath:

  • The path on the host (hostPath) where configuration and data are stored for each of the services.
  • If the directory does not exist, it will be created.
  • Because this directory persists on the host, it will remain after pods are deleted.
  • We will be using dataDirHostPath to persist Rook data on the Kubernetes hosts.
  • Ensure the droplets have at least 5 GB of free space available on the specified path.
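In cluster.yaml the setting looks roughly like this (a sketch; /var/lib/rook is the path used in the Rook example manifests):

```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Host path where Rook persists config and data for each service;
  # created automatically on each node if it does not exist.
  dataDirHostPath: /var/lib/rook
```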

Install Rook

On your master node (kubeadm-001 or ubuntu-s-2vcpu-4gb-sgp1-01), run:

git clone https://github.com/rook/rook.git

cd /root/rook/cluster/examples/kubernetes/ceph

Deploy the Rook Operator

Deploy the Rook Operator: kubectl create -f operator.yaml

Verify Rook Operator

Verify the:

  • rook-ceph-operator
  • rook-ceph-agent
  • rook-discover

pods are in the Running state before proceeding:

watch -n1 kubectl -n rook-ceph-system get pod

Sample Output:

Every 1.0s: kubectl -n rook-ceph-system get pod                                                               Sun Jul 29 23:34:05 2018

NAME                                  READY     STATUS    RESTARTS   AGE
rook-ceph-agent-2vggg                 1/1       Running   0          1m
rook-ceph-agent-7svvr                 1/1       Running   0          1m
rook-ceph-agent-cpfrj                 1/1       Running   0          1m
rook-ceph-operator-78d498c68c-hmttk   1/1       Running   0          1m
rook-discover-gf9tk                   1/1       Running   0          1m
rook-discover-nfc2z                   1/1       Running   0          1m
rook-discover-zlx5h                   1/1       Running   0          1m

Create a Rook Cluster

Create the storage cluster.

vi cluster.yaml

Make the following change to ensure filestore is used and not bluestore:

  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: false
    storeConfig:      
      storeType: filestore 
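With useAllNodes: true and useAllDevices: false, Rook creates OSDs on directories under dataDirHostPath rather than on raw devices. A directory can also be listed explicitly under the same storage: section (a sketch; storage-dir is a hypothetical name):

```yaml
    directories:
    # Hypothetical path; any host directory with enough free space works.
    - path: /var/lib/rook/storage-dir
```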

Create the storage cluster: kubectl create -f cluster.yaml

Verify Rook Cluster

Use kubectl to list pods in the rook-ceph namespace.

You should be able to see the following pods once they are all running.

The number of osd pods will depend on the number of nodes in the cluster and the number of devices and directories configured.

watch -n1 kubectl -n rook-ceph get pod

Sample Output:

Every 1.0s: kubectl -n rook-ceph get pod                                                                      Mon Jul 30 00:20:14 2018

NAME                                      READY     STATUS      RESTARTS   AGE
rook-ceph-mgr-a-9c44495df-7p2lz           1/1       Running     0          1m
rook-ceph-mon0-d5j58                      1/1       Running     0          1m
rook-ceph-mon1-7wqrz                      1/1       Running     0          1m
rook-ceph-mon2-jgfjs                      1/1       Running     0          1m
rook-ceph-osd-id-0-55d5844989-nx7qr       1/1       Running     0          1m
rook-ceph-osd-id-1-df46f7c5-qrv67         1/1       Running     0          1m
rook-ceph-osd-id-2-6b84c6dc8f-bzjkx       1/1       Running     0          1m
rook-ceph-osd-prepare-kubeadm-002-fmb4c   0/1       Completed   0          1m
rook-ceph-osd-prepare-kubeadm-003-sfwx2   0/1       Completed   0          1m
rook-ceph-osd-prepare-kubeadm-004-vql69   0/1       Completed   0          1m

Add Block Storage

tl;dr - Block storage is data storage typically used in storage-area network (SAN) environments, where data is stored in volumes, also referred to as blocks.

Provision Storage

vi storageclass.yaml

  • Make the following edit so the pool keeps three data replicas: size: 3

apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  # For an erasure-coded pool, comment out the replication size above and uncomment the following settings.
  # Make sure you have enough OSDs to support the replica size or erasure code chunks.
  #erasureCoded:
  #  dataChunks: 2
  #  codingChunks: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  pool: replicapool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs

Create the storage class: kubectl create -f storageclass.yaml

Sample Output:

root@kubeadm-001:~/rook/cluster/examples/kubernetes/ceph# kubectl create -f storageclass.yaml
pool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

Verify Block Storage Operation

cd /root/rook/cluster/examples/kubernetes
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml

PersistentVolumeClaim Snippet

  • Note that the claim references the new storage class: storageClassName: rook-ceph-block

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

kubectl get pvc

root@kubeadm-001:~/rook/cluster/examples/kubernetes# kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound     pvc-923176b6-938f-11e8-84db-1ebb138ed330   20Gi       RWO            rook-ceph-block   13s
wp-pv-claim      Bound     pvc-95b4d747-938f-11e8-84db-1ebb138ed330   20Gi       RWO            rook-ceph-block   7s

kubectl edit svc wordpress

Change type: LoadBalancer to type: NodePort so WordPress is reachable directly on a node port instead of through a cloud load balancer.
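After the edit, the relevant part of the Service spec reads:

```yaml
spec:
  # NodePort exposes the service on a high port on every node,
  # instead of provisioning a cloud load balancer.
  type: NodePort
```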

Get the NodePort value: kubectl get svc wordpress
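The assigned port appears in the PORT(S) column as 80:&lt;nodePort&gt;/TCP. As a sketch using a hypothetical output line, the port can be pulled out with awk:

```shell
# Hypothetical line from `kubectl get svc wordpress` output;
# substitute the line from your own cluster.
svc_line='wordpress   NodePort   10.103.241.21   <none>   80:31351/TCP   5m'

# Field 5 is "80:31351/TCP": split on ":" and "/", keep the middle piece.
node_port=$(echo "$svc_line" | awk '{ split($5, p, /[:\/]/); print p[2] }')
echo "$node_port"
```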

Check for the WordPress page at NodeIP:NodePort.


Teardown

kubectl delete -f wordpress.yaml
kubectl delete -f mysql.yaml
kubectl delete -n rook-ceph pool replicapool
kubectl delete storageclass rook-ceph-block

Ceph Toolbox

The Rook toolbox is a container with common tools used for Rook debugging and testing.

cd /root/rook/cluster/examples/kubernetes/ceph

kubectl apply -f toolbox.yaml

Verify Installation: kubectl -n rook-ceph get pod rook-ceph-tools

Once the rook-ceph-tools pod is running, you can connect to it with: kubectl -n rook-ceph exec -it rook-ceph-tools -- bash

The toolbox includes common Ceph commands for troubleshooting:

ceph status
ceph osd status
ceph df
rados df

ceph status

[root@rook-ceph-tools /]# ceph status
  cluster:
    id:     7c5589c0-855d-4a34-960c-dd61d1413a0c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum rook-ceph-mon2,rook-ceph-mon1,rook-ceph-mon0
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 100 pgs
    objects: 102 objects, 230 MB
    usage:   16037 MB used, 216 GB / 232 GB avail
    pgs:     100 active+clean

Teardown: kubectl -n rook-ceph delete pod rook-ceph-tools

Next Steps

Now add monitoring to your cluster by completing the steps in Monitoring Kubernetes with Prometheus and Grafana via Helm.

End Of Section
