
Minikube

Prerequisites

Make sure the user has access to libvirt:

sudo usermod -a -G libvirt $USER
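
A quick sanity check that the group is applied and the host supports KVM (assuming the libvirt client tools are installed; you may need to log out and back in for the group change to take effect):

groups | grep libvirt
virt-host-validate qemu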

Install minikube:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
install minikube-linux-amd64 ~/.local/bin/minikube
rm minikube-linux-amd64
minikube version

Set the minikube driver:

minikube config set driver kvm2
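
Confirm the setting:

minikube config get driver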

Clean up any old install:

minikube delete

Start a new single node minikube:

minikube start --cpus=6 --memory=8G --cni=calico --addons=metallb --addons=istio-provisioner --addons=istio

Or start a cluster with one control-plane node and three worker nodes. This currently has issues; see rook/rook#4238

minikube start --nodes 4 --cpus=6 --memory=8G --cni=calico --addons=metallb --addons=istio-provisioner --addons=istio

Check that it is working:

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured



minikube kubectl -- get nodes

If some pods don't start correctly, try deleting them and letting their controllers recreate them.
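
For example (the pod name here is a placeholder; list the pods first to find the stuck one):

kubectl -n kube-system get pods
kubectl -n kube-system delete pod <pod-name>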

Load balancer

Get the minikube IP address and pick a range for MetalLB:

$ minikube ip
192.168.39.166

$ minikube addons configure metallb
-- Enter Load Balancer Start IP: 192.168.39.200
-- Enter Load Balancer End IP: 192.168.39.220
✅  metallb was successfully configured

Check that the addresses are set.

$ kubectl describe configmap config -n metallb-system
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.39.200-192.168.39.220

Events:  <none>

If for some reason the IP range doesn't get set, you will need to edit the configmap instead and add in the range.

kubectl edit configmap config -n metallb-system

Add in the IP range so that the YAML looks like this:

apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.39.200-192.168.39.220
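
Check that the MetalLB pods are running:

kubectl -n metallb-system get pods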

Istio ingress

Enable the addons:

minikube addons enable istio-provisioner
minikube addons enable istio

Have a look at the bookinfo example https://istio.io/latest/docs/examples/bookinfo/
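
A minimal sketch of deploying it, assuming an Istio release is checked out so its samples/ directory is available:

kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml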

Rook Ceph

Check that you can see the VMs with virsh; if not, you may need to use the system URI for libvirt:

virsh list --all 
export LIBVIRT_DEFAULT_URI=qemu:///system

This all seems to fall in a heap after a restart with a single-node cluster.

Single Node: Add an extra disk for rook-ceph

sudo virsh vol-create-as --pool default --name minikube-rook.raw --format raw --capacity 40G --allocation 10M
sudo virsh attach-disk --domain minikube --source $(virsh vol-list --pool default | grep minikube-rook.raw|awk '{print $2}') --target vdb --cache none --persistent
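
Verify the disk was attached:

virsh domblklist minikube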

Multi Node: Add an extra disk for rook-ceph to each node

kubectl get nodes -o=json |jq -r '.items[] | select(.metadata.name | test("minikube-")).metadata.name' | while read nodename
do 
    virsh vol-create-as --pool default --name ${nodename}-rook.raw --format raw --capacity 20G --allocation 10M
    virsh attach-disk --domain ${nodename} --source $(virsh vol-list --pool default | grep ${nodename}-rook.raw|awk '{print $2}') --target vdb --cache none --persistent
done

Stop and start the VM so that it sees the disk:

minikube stop
minikube start
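
Check that the node sees the new block device (on a multi-node cluster, add -n to target a specific node, e.g. minikube ssh -n minikube-m02):

minikube ssh -- lsblk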

Clone the rook repo and check out the latest release:

git clone https://github.com/rook/rook
cd rook
git checkout release-1.6

Apply the manifests:

kubectl apply -f cluster/examples/kubernetes/ceph/crds.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/common.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/operator.yaml

Single node cluster

kubectl apply -f cluster/examples/kubernetes/ceph/cluster-test.yaml

Multi node cluster

kubectl apply -f cluster/examples/kubernetes/ceph/cluster.yaml

Watch the logs:

kubectl logs -n rook-ceph -l app=rook-ceph-operator  -f
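
Once the cluster comes up, you can check Ceph health from the toolbox pod; the toolbox manifest ships alongside the other examples:

kubectl apply -f cluster/examples/kubernetes/ceph/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status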

Access the dashboard

  1. Get the admin password

    kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
    
  2. Create a port-forward

    kubectl --namespace rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000
    
  3. Access the UI in a browser at http://localhost:7000. The username is admin.

Create an object store (S3)

kubectl apply -f cluster/examples/kubernetes/ceph/object-test.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/storageclass-bucket-delete.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/object-bucket-claim-delete.yaml
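
The bucket claim produces a ConfigMap and a Secret named after the claim, holding the endpoint and S3 credentials. A sketch of retrieving them, with the claim name left as a placeholder to match whatever the manifest defines:

kubectl get objectbucketclaim
kubectl get configmap <claim-name> -o jsonpath='{.data.BUCKET_HOST}' && echo
kubectl get secret <claim-name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode && echo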