- install minikube, then start it with enough CPUs and 2 extra disks (for the 2 OSDs):
$ minikube start --cpus 6 --extra-disks=2 --driver=kvm2
- install kubectl and use it from the host. To point the host's docker client at the minikube docker daemon:
$ eval $(minikube docker-env)
- install cert-manager (required by the jaeger operator):
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
- install jaeger in the observability namespace:
$ kubectl create namespace observability
$ kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.42.0/jaeger-operator.yaml -n observability
- create a simple all-in-one jaeger instance:
$ cat << EOF | kubectl apply -f -
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
  namespace: observability
EOF
- expose the query api as a NodePort service:
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: simplest-query-external
  namespace: observability
  labels:
    app: jaeger
    app.kubernetes.io/component: service-query
    app.kubernetes.io/instance: simplest
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: simplest-query
    app.kubernetes.io/part-of: jaeger
spec:
  ports:
  - name: http-query
    port: 16686
    protocol: TCP
    targetPort: 16686
  selector:
    app.kubernetes.io/component: all-in-one
    app.kubernetes.io/instance: simplest
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: simplest
    app.kubernetes.io/part-of: jaeger
    app: jaeger
  sessionAffinity: None
  type: NodePort
EOF
- make sure there are disks without a filesystem:
$ minikube ssh lsblk
- download and install the rook operator (use v1.10):
$ git clone -b release-1.10 https://github.com/rook/rook.git
$ cd rook/deploy/examples
$ kubectl create -f crds.yaml -f common.yaml
in operator.yaml, increase the debug level:
data:
  # The logging level for the operator: ERROR | WARNING | INFO | DEBUG
  ROOK_LOG_LEVEL: "DEBUG"
then apply the operator:
$ kubectl create -f operator.yaml
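For context, ROOK_LOG_LEVEL lives in the rook-ceph-operator-config ConfigMap inside operator.yaml; an abridged sketch of the assumed surrounding layout, with only the log level changed from the default:

```yaml
# Abridged sketch of the relevant ConfigMap in operator.yaml;
# other data keys from the example file are omitted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # The logging level for the operator: ERROR | WARNING | INFO | DEBUG
  ROOK_LOG_LEVEL: "DEBUG"
```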
use a developer build of ceph that supports tracing. To do that, edit cluster-test.yaml and replace the line:
image: quay.io/ceph/ceph:v17
with:
image: quay.ceph.io/ceph-ci/ceph:wip-yuval-full-putobj-trace
add the following jaeger arguments to the ConfigMap in cluster-test.yaml, under the [global] section:
jaeger_tracing_enable = true
jaeger_agent_port = 6831
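The ConfigMap in question is the rook-config-override ConfigMap that cluster-test.yaml ships with; after the edit it would look roughly like this (a sketch assuming the stock layout of the example file, with other [global] lines omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    # settings already present in the example file go here, followed by:
    jaeger_tracing_enable = true
    jaeger_agent_port = 6831
```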
add annotations to the cluster spec, so that jaeger will inject an agent sidecar into the OSD pods:
spec:
  annotations:
    osd:
      sidecar.jaegertracing.io/inject: "true"
and apply the cluster:
$ kubectl create -f cluster-test.yaml
- edit object-test.yaml to add annotations to the object store, so that jaeger will inject an agent sidecar into the RGW pods:
gateway:
  annotations:
    sidecar.jaegertracing.io/inject: "true"
then start the object store:
$ kubectl create -f object-test.yaml
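In object-test.yaml the annotation goes under spec.gateway of the CephObjectStore; a sketch of the surrounding context (fields other than the annotation are assumed defaults of the example file):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
    annotations:
      sidecar.jaegertracing.io/inject: "true"
```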
- create a storage class and a bucket claim:
$ kubectl create -f storageclass-bucket-delete.yaml
$ kubectl create -f object-bucket-claim-delete.yaml
- create a NodePort service so that RGW can be accessed from outside of k8s:
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-my-store-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
spec:
  ports:
  - name: rgw
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
  sessionAffinity: None
  type: NodePort
EOF
- fetch the URL that allows access to the RGW service from the host running the minikube VM:
$ export AWS_URL=$(minikube service --url rook-ceph-rgw-my-store-external -n rook-ceph)
- fetch the user credentials and bucket name:
$ export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
$ export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
$ export BUCKET_NAME=$(kubectl get objectbucketclaim ceph-delete-bucket -o jsonpath='{.spec.bucketName}')
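The `base64 --decode` in the two credential commands is needed because Kubernetes stores Secret values base64-encoded. A minimal sketch of the same decoding step in Python (the encoded value below is AWS's documented placeholder key, not a real credential):

```python
import base64

# Value as it comes back from `kubectl get secret -o jsonpath=...`:
# base64-encoded text (this one encodes AWS's documented example key,
# not a real credential).
encoded = "QUtJQUlPU0ZPRE5ON0VYQU1QTEU="

# Decoding mirrors the `| base64 --decode` step in the commands above.
access_key = base64.b64decode(encoded).decode()
print(access_key)  # -> AKIAIOSFODNN7EXAMPLE
```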
- now use them to upload an object:
$ echo "hello world" > hello.txt
$ aws --endpoint-url "$AWS_URL" s3 cp hello.txt s3://"$BUCKET_NAME"
- fetch the URL that allows access to the jaeger query service from the host running the minikube VM:
$ export JAEGER_URL=$(minikube service --url simplest-query-external -n observability)
- query traces:
$ curl "$JAEGER_URL/api/traces?service=rgw&limit=20&lookback=1h" | jq
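The JSON returned by the query API nests spans under data[].spans[]. A small Python sketch that pulls out the operation names of the traced spans; the response fragment below is hand-written in that shape for illustration, with made-up IDs and operation names, not captured from a real run:

```python
import json

# Hand-written fragment in the shape the Jaeger query API returns
# (data -> traces -> spans); IDs and names are illustrative only.
response = json.loads("""
{
  "data": [
    {
      "traceID": "abc123",
      "spans": [
        {"spanID": "s1", "operationName": "put_obj", "duration": 1200},
        {"spanID": "s2", "operationName": "complete", "duration": 300}
      ]
    }
  ]
}
""")

# Collect every operation name, as a quick way to see which RGW code
# paths were traced for the uploaded object.
ops = [span["operationName"]
       for trace in response["data"]
       for span in trace["spans"]]
print(ops)  # -> ['put_obj', 'complete']
```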
- delete the object uploaded to the bucket:
$ aws --endpoint-url "$AWS_URL" s3 rm s3://"$BUCKET_NAME"/hello.txt
- delete the OBC:
$ kubectl delete obc ceph-delete-bucket
- delete the object store:
$ kubectl -n rook-ceph delete cephobjectstore my-store
- delete the cluster:
$ kubectl -n rook-ceph delete CephBlockPool builtin-mgr
$ kubectl -n rook-ceph delete cephcluster my-cluster
if this does not work, kill the k8s cluster :-)
$ minikube stop
$ minikube delete
note: a bigger VM was needed, and the docker runtime looked buggy (hence the kvm2 driver).