Make sure the user has access to libvirt:
sudo usermod -a -G libvirt $USER
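The group change takes effect at your next login; to pick it up in the current shell:
newgrp libvirt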
Install minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
install minikube-linux-amd64 ~/.local/bin/minikube
rm minikube-linux-amd64
minikube version
Set the minikube driver to kvm2:
minikube config set driver kvm2
Clean up any old install:
minikube delete
Start a new single node minikube:
minikube start --cpus=6 --memory=8G --cni=calico --addons=metallb --addons=istio-provisioner --addons=istio
Or start a cluster with 1 control-plane node and 3 worker nodes. This currently has issues; see rook/rook#4238
minikube start --nodes 4 --cpus=6 --memory=8G --cni=calico --addons=metallb --addons=istio-provisioner --addons=istio
Check that it is working:
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
minikube kubectl -- get nodes
If some pods don't start correctly, try deleting them and letting their controllers recreate them, for example:
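The pod name and namespace below are placeholders; substitute the stuck pod's details:
kubectl get pods --all-namespaces
kubectl delete pod <pod-name> -n <namespace>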
Get the minikube IP address and pick a range for MetalLB:
$ minikube ip
192.168.39.166
$ minikube addons configure metallb
-- Enter Load Balancer Start IP: 192.168.39.200
-- Enter Load Balancer End IP: 192.168.39.220
✅ metallb was successfully configured
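You can also confirm the MetalLB controller and speaker pods are running:
kubectl get pods -n metallb-system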
Check that the addresses are set:
$ kubectl describe configmap config -n metallb-system
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.39.200-192.168.39.220

Events:  <none>
If for some reason the IP range doesn't get set, you will need to edit the configmap instead and add the range by hand:
kubectl edit configmap config -n metallb-system
Add in the IP range so that the YAML looks like this:
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.39.200-192.168.39.220
Enable the Istio addons (if you didn't pass them to minikube start):
minikube addons enable istio-provisioner
minikube addons enable istio
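Confirm the addons are enabled:
minikube addons list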
Have a look at the Bookinfo example: https://istio.io/latest/docs/examples/bookinfo/
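If you want to deploy it, a rough sketch (the download command and sample path are from the upstream Istio docs; adjust the version to match what the addon installed):
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml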
Check that you can see the VMs with virsh; if not, you may need to use the system URI for libvirt:
virsh list --all
export LIBVIRT_DEFAULT_URI=qemu:///system
This all seems to fall in a heap after a restart with a single-node cluster.
Single Node: Add an extra disk for rook-ceph
sudo virsh vol-create-as --pool default --name minikube-rook.raw --format raw --capacity 40G --allocation 10M
sudo virsh attach-disk --domain minikube --source $(virsh vol-list --pool default | grep minikube-rook.raw|awk '{print $2}') --target vdb --cache none --persistent
Multi Node: Add an extra disk for rook-ceph to each node
kubectl get nodes -o=json |jq -r '.items[] | select(.metadata.name | test("minikube-")).metadata.name' | while read nodename
do
virsh vol-create-as --pool default --name ${nodename}-rook.raw --format raw --capacity 20G --allocation 10M
virsh attach-disk --domain ${nodename} --source $(virsh vol-list --pool default | grep ${nodename}-rook.raw|awk '{print $2}') --target vdb --cache none --persistent
done
Stop and start the VM so that it sees the disk:
minikube stop
minikube start
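Verify the node can see the new disk; it should show up as vdb:
minikube ssh -- lsblk
On a multi-node cluster, check each node by name, e.g.:
minikube ssh -n minikube-m02 -- lsblk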
Clone the rook repo and check out the latest release:
git clone https://github.com/rook/rook
cd rook
git checkout release-1.6
Apply the manifests:
kubectl apply -f cluster/examples/kubernetes/ceph/crds.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/common.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/operator.yaml
Single node cluster:
kubectl apply -f cluster/examples/kubernetes/ceph/cluster-test.yaml
Multi node cluster:
kubectl apply -f cluster/examples/kubernetes/ceph/cluster.yaml
Watch the logs:
kubectl logs -n rook-ceph -l app=rook-ceph-operator -f
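Once the operator settles, check the cluster health; the HEALTH column should eventually show HEALTH_OK:
kubectl -n rook-ceph get cephcluster
kubectl -n rook-ceph get pods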
Access the dashboard:
- Get the admin password:
  kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
- Create a port-forward:
  kubectl --namespace rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000
- Access the UI in a browser at http://localhost:7000. The username is admin.
Create an object store (S3):
kubectl apply -f cluster/examples/kubernetes/ceph/object-test.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/storageclass-bucket-delete.yaml
kubectl apply -f cluster/examples/kubernetes/ceph/object-bucket-claim-delete.yaml
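The claim produces a ConfigMap and a Secret, named after the ObjectBucketClaim, holding the bucket details and S3 credentials. A sketch, with <claim-name> as a placeholder for the name used in the manifest:
kubectl get objectbucketclaim
kubectl get configmap <claim-name> -o jsonpath='{.data.BUCKET_NAME}' && echo
kubectl get secret <claim-name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode && echo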