cinder-k8s-deploy

This demo describes how to attach Cinder, running on top of Kubernetes, to an existing external Ceph cluster. At the end, you will have a Cinder deployment without authentication enabled (noauth mode).

Deploy Kubernetes here.

Deploy KubeVirt here.

Prerequisites:

You must have name resolution to the node that will run the MariaDB server.

Create the Ceph client configuration on the Ceph servers, not the Kubernetes servers.

 ceph osd pool create cinder_volumes 128
 ceph auth add client.cinder -i /tmp/ceph.client.cinder.keyring
 ceph auth list
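The keyring imported above must exist before running ceph auth add. If you do not already have one, a minimal sketch for generating it with ceph-authtool, assuming client.cinder only needs read access to the monitors and read/write access to the cinder_volumes pool:

ceph-authtool --create-keyring /tmp/ceph.client.cinder.keyring --gen-key -n client.cinder \
  --cap mon 'allow r' --cap osd 'allow rwx pool=cinder_volumes'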

Copy the resulting ceph.client.cinder.keyring and ceph.conf files over to the Kubernetes master node.
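One way to do the copy, as a sketch (assumes root SSH access from the Ceph admin node; /etc/ceph/ceph.conf is the usual location on the Ceph side, and the master hostname matches the one used later in this doc):

scp /etc/ceph/ceph.conf /tmp/ceph.client.cinder.keyring root@smc-master.cloud.lab.eng.bos.redhat.com:~/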

On the master node, install gcc, python-devel, and ceph-common, then use pip to install the Cinder client.

yum -y install gcc python-devel ceph-common
wget https://bootstrap.pypa.io/get-pip.py
chmod +x get-pip.py
./get-pip.py
pip list
pip install python-cinderclient --user
export PATH=$PATH:$HOME/.local/bin/
cinder --version

Get started:

On a node with kubectl, label the master node to host the MariaDB pod. You can pick any node; we are arbitrarily picking the master.

kubectl label node smc-master.cloud.lab.eng.bos.redhat.com mariadb=true
kubectl get nodes --show-labels  # looking for the label, mariadb=true

Create a namespace for the OpenStack Cinder components.

kubectl get ns
kubectl create namespace openstack
kubectl get ns

Set openstack as your default namespace

kubectl config view
kubectl config set-context $(kubectl config current-context) --namespace=openstack
kubectl config view

Create service accounts and secret.

kubectl create sa cinder-privileged -n openstack
kubectl create sa cinder-anyuid -n openstack
kubectl get sa -n openstack

kubectl create secret generic cinder-secrets \
--from-literal=mariadb-root-password=weakpassword  \
--from-literal=cinder-password=cinderpassword \
--from-literal=rabbitmq-password=rabbitmqpassword

kubectl get secret

The ceph.client.cinder.keyring and ceph.conf files need to be present in the root directory of the Kubernetes master.

Create a secret from the ceph.conf and cinder keyring files.

kubectl create secret generic ceph-secrets --from-file=ceph.conf --from-file=ceph.client.cinder.keyring -n default
kubectl describe secret ceph-secrets -n default
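To verify that the file content round-tripped into the secret intact, you can decode one of the keys (the key names follow the file names used above):

kubectl get secret ceph-secrets -n default -o jsonpath='{.data.ceph\.conf}' | base64 -d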

Apply the Cinder / Ceph configuration.

kubectl apply -f cinder-ceph.yml

kubectl get all

At this point, jobs are kicking off and Cinder is being bootstrapped. They set up the environment for the api, scheduler, and volume pods to function.
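To watch the bootstrap jobs run to completion before poking at the pods, something like:

kubectl get jobs -n openstack -w
kubectl get pods -n openstack -w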

kubectl exec -it po/cinder-volume.... -- /bin/bash
export TERM=linux
cat /var/log/cinder/cinder-volume.log
cinder-manage service list
exit

Install the Cinder client.

wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py

yum install gcc python-devel
pip install python-cinderclient --user
~/.local/bin/cinder --version
export PATH=$PATH:$HOME/.local/bin
which cinder
kubectl get ing -n openstack

Once the ingress is working, you can list, delete, and run other Cinder operations. Create an environment file with the following settings.

Deploy the ingress with RBAC (see the ingress configuration section below).

export CINDERCLIENT_BYPASS_URL=http://10.244.1.4:8776/v3
export OS_VOLUME_API_VERSION=3.10
export OS_AUTH_SYSTEM=noauth
export OS_PROJECT_ID=admin
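To persist these settings as the environment file mentioned above, a minimal sketch (the ~/cinderrc filename is arbitrary, and the pod IP in CINDERCLIENT_BYPASS_URL will differ in your cluster):

cat > ~/cinderrc <<'EOF'
export CINDERCLIENT_BYPASS_URL=http://10.244.1.4:8776/v3
export OS_VOLUME_API_VERSION=3.10
export OS_AUTH_SYSTEM=noauth
export OS_PROJECT_ID=admin
EOF
source ~/cinderrc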

Now you can list or perform other operations.

cinder list
cinder delete <UID HERE>
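For a quick end-to-end smoke test against the Ceph pool, you can also create and remove a throwaway volume (the name test-volume is arbitrary):

cinder create --name test-volume 1
cinder list
cinder delete test-volume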

Cleanup

kubectl delete rs,deploy,po,svc,jobs,statefulsets --all

Once you can list volumes manually, move on to the next phase, which leverages dynamic provisioning.

Get the secrets: kubectl get all,secrets,configmap -n openstack

Now provision a VM

Configure the standalone cinder provisioner.

Provision in the default namespace.
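Since these objects live in the default namespace, you can switch your kubectl context back, mirroring the earlier set-context command:

kubectl config set-context $(kubectl config current-context) --namespace=default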

Deploying the VMs is a four-step process:

  • Create the PVC using the StorageClass name. This talks to Cinder and requests a volume, which Cinder then creates; it is essentially dynamic storage provisioning. It does not use the default Cinder storage class; it leverages the extra pod, which acts as a proxy. (A minimal PVC sketch follows this list.)

kubectl apply -f ...
kubectl get pvc
kubectl get pv
kubectl get pv -o yaml
  • Provision the importer pod. Its purpose is to create the golden PVs, which just hold the operating system images (e.g. CentOS, Cirros, Fedora). This operation only needs to occur once per operating system; VMs will be provisioned off of these golden PVs.

kubectl apply -f ...
  • Create the PVC. This step creates a PVC and clones it from one of the predetermined golden PVs.

kubectl apply -f ...
  • Deploy the VM.

kubectl apply -f ...
kubectl get vm
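For illustration, a minimal PVC sketch for the first step; the StorageClass name standalone-cinder and the claim name test-cinder-pvc are hypothetical and must match whatever your provisioner and manifests actually use:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cinder-pvc               # hypothetical claim name
spec:
  storageClassName: standalone-cinder  # hypothetical; use your provisioner's StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-cinder-pvc.yaml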

If you have any issues, check the logs of the standalone-cinder-provisioner pod.
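For example (substitute the actual pod name and namespace reported by the first command):

kubectl get pods --all-namespaces | grep provisioner
kubectl logs <STANDALONE-CINDER-PROVISIONER POD NAME HERE> -n <ITS NAMESPACE>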

Connect to the VM

  • ssh

  • rdp

  • console
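For the console option above, a sketch assuming the KubeVirt virtctl client is installed and the VM is named testvm (a hypothetical name; use whatever kubectl get vm reports):

virtctl console testvm
virtctl vnc testvm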

Ingress configuration: nginx ingress on bare metal.

Create the ingress controller using the Kubernetes ingress-nginx configuration, not the one from NGINX Inc.

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name})
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
kubectl get all -n ingress-nginx
kubectl get pods

Test the configuration with a simple nginx app. Deploy the app.

kubectl run nginx --image=nginx --port=80
kubectl get pods

Expose the app via a service. The NodePort type exposes a random port on every node that routes to the endpoint of the nginx application running in the pod.

kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl get svc
kubectl describe svc nginx

Confirm that the NodePort is available on each host.

netstat -tulnp
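To capture the assigned NodePort for the checks below rather than reading it off the describe output, a small sketch using jsonpath:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo $NODE_PORT
netstat -tulnp | grep $NODE_PORT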

Deploy the nginx ingress object. Save the following as nginx-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: smc-work1.cloud.lab.eng.bos.redhat.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
kubectl apply -f nginx-ingress.yaml
kubectl get ing
kubectl describe ing nginx-ingress

Try to hit the app from outside the cluster. Creating the service with a NodePort exposed the port on every host, so one of two things can happen here. If you are only running one instance of the nginx application (per the default above), you have to use the DNS hostname of the node that is running the nginx pod. For example, if kubectl get pods -o wide shows the pod on smc-work1, curling smc-work1 returns a response, while smc-work2 fails.

kubectl get pods -o wide
curl http://smc-work1.cloud.lab.eng.bos.redhat.com:31809

You can also scale the nginx app so there is a pod on every application node in the cluster, and then every node responds.

kubectl scale deployment nginx --replicas=2
kubectl get pods -o wide
curl http://smc-work1.cloud.lab.eng.bos.redhat.com:31809
curl http://smc-work2.cloud.lab.eng.bos.redhat.com:31809