@robraux
Last active December 8, 2020 21:12
Overview

This document outlines a failure scenario when using the Kuma service mesh in Kubernetes mode with a Loft.sh vCluster. The Kuma demo app behaves correctly on bare k3s, Kind, minikube, and bare AWS Kubernetes, and it has been verified to work when deployed directly to the Loft-managed AWS cluster (outside the vCluster).

  • Install and configure Loft. It may be possible to sidestep this setup by using Kiosk directly, but I have not tried that, nor does my use case warrant it.
  • Create a vCluster (k3s inside k8s).
  • Install the Kuma control plane.
  • Apply the kuma-demo app.
  • Observe the failure.

Pre-Requirements

  • Any remote Kubernetes cluster
    • I'm using AWS EKS v1.18.9-eks-d1db3c
  • kubectl (check via kubectl version)
    • I'm using:
      Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-14T14:49:35Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
      Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.12+k3s1", GitCommit:"56cd36302dc3188f21f9877d1309df7d80cd8b7d", GitTreeState:"clean", BuildDate:"2020-11-13T06:12:38Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
  • helm v3 (check with helm version)
    • I'm using version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"dirty", GoVersion:"go1.15.5"}
  • A kube-context with admin access to this Kubernetes cluster (check with kubectl auth can-i create clusterrole -A)
    • Mine returns yes
  • kumactl installed and in path
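
The tool checks above can be scripted. A minimal pre-flight sketch covering only the tools listed; the RBAC check is left as a comment since it needs a reachable cluster:

```shell
#!/bin/sh
# Pre-flight sketch for the prerequisites above: report any missing tools.
missing=""
for tool in kubectl helm kumactl loft; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all tools present"
fi
# Admin-access check (needs a live cluster, so not run here):
#   kubectl auth can-i create clusterrole -A
```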

Reproduction Steps

  1. Install and configure Loft with the single remote cluster configuration.

You may need to update <LOFTURL>/clusters/<CLUSTERNAME>/templates - loft-limit-range to lower the min memory requirement below 10Mi.

Additional info in case it's relevant:

  • Enabled: kiosk, cert-manager, ingress-nginx
  • Cluster Account Templates: loft-all-cluster-default
  • User Role: loft-management-admin
  • k8s Token Group: system:masters

User quota well above any limits.

  2. Create a virtual cluster inside your bare cluster. This will create a new Space and vCluster.

loft create vcluster kuma-demo

[done] √ Successfully created the virtual cluster kuma-demo in cluster loft-cluster and space vcluster-kuma-demo
[done] √ Successfully updated kube context to use virtual cluster kuma-demo in space vcluster-kuma-demo and cluster loft-cluster

Your context should now be set automatically to this new cluster.

kubectl config get-contexts

*         loft-vcluster_kuma-demo_vcluster-kuma-demo_loft-cluster                       loft-vcluster_kuma-demo_vcluster-kuma-demo_loft-cluster                       loft-vcluster_kuma-demo_vcluster-kuma-demo_loft-cluster
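
The context name in that output follows a recognizable pattern, loft-vcluster_<vcluster>_<space>_<cluster>. This is inferred from the output above, not from Loft documentation:

```shell
#!/bin/sh
# Reconstruct the expected kube-context name from its parts; the
# loft-vcluster_<vcluster>_<space>_<cluster> pattern is inferred from
# the `loft create vcluster` output, not a documented contract.
vcluster="kuma-demo"
space="vcluster-kuma-demo"
cluster="loft-cluster"
ctx="loft-vcluster_${vcluster}_${space}_${cluster}"
echo "$ctx"   # loft-vcluster_kuma-demo_vcluster-kuma-demo_loft-cluster
# Verify it is the active context:
#   kubectl config current-context
```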
  3. Install the Kuma control plane: kumactl install control-plane | kubectl apply -f -
serviceaccount/kuma-control-plane created
secret/kuma-tls-cert created
configmap/kuma-control-plane-config created
customresourcedefinition.apiextensions.k8s.io/circuitbreakers.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplanes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficlogs.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficpermissions.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficroutes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/traffictraces.kuma.io created
customresourcedefinition.apiextensions.k8s.io/zoneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/zones.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplaneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/externalservices.kuma.io created
customresourcedefinition.apiextensions.k8s.io/faultinjections.kuma.io created
customresourcedefinition.apiextensions.k8s.io/healthchecks.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/proxytemplates.kuma.io created
customresourcedefinition.apiextensions.k8s.io/serviceinsights.kuma.io created
clusterrole.rbac.authorization.k8s.io/kuma-control-plane created
clusterrolebinding.rbac.authorization.k8s.io/kuma-control-plane created
role.rbac.authorization.k8s.io/kuma-control-plane created
rolebinding.rbac.authorization.k8s.io/kuma-control-plane created
service/kuma-control-plane created
deployment.apps/kuma-control-plane created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-admission-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/kuma-validating-webhook-configuration created
  4. Install the Kuma demo app. This could be any app that carries the kuma.io/sidecar-injection: enabled annotation.

kubectl apply -f https://bit.ly/demokuma

deployment.apps/postgres-master created
service/postgres created
deployment.apps/redis-master created
service/redis created
service/backend created
deployment.apps/kuma-demo-backend-v0 created
deployment.apps/kuma-demo-backend-v1 created
deployment.apps/kuma-demo-backend-v2 created
service/frontend created
deployment.apps/kuma-demo-app created
  5. Observe the kuma-sidecar behavior, specifically the Envoy proxy start attempts. Note that the pods will restart because the readiness/liveness probes fail; those probes are meant to pass through the Envoy proxy.

Review the pods created; they should all show 1/2 READY because the sidecar never fully initializes: kubectl get pods -n kuma-demo

NAME READY STATUS RESTARTS AGE
postgres-master-65df766577-l76bm       1/2     Running   1          2m30s
kuma-demo-backend-v0-d7cb6b576-z5j6v   1/2     Running   1          2m29s
redis-master-78ff699f7-f4nvf           1/2     Running   1          2m29s
kuma-demo-app-69c9fd4bd-27whm          1/2     Running   1          2m28s

Observe the kuma-sidecar logs: kubectl logs pod/kuma-demo-app-69c9fd4bd-27whm -n kuma-demo -c kuma-sidecar

2020-12-08T16:43:32.638Z	INFO	kuma-dp.run	effective configuration	{"config": "{\"controlPlane\":{\"caCert\":\"-----BEGIN CERTIFICATE-----\\nMIIDEDCCAfigAwIBAgIRAIr0UqJqos/Ud81LVnR+k6YwDQYJKoZIhvcNAQELBQAw\\nEjEQMA4GA1UEAxMHa3VtYS1jYTAeFw0yMDEyMDgxNjM4MTRaFw0zMDEyMDYxNjM4\\nMTRaMBIxEDAOBgNVBAMTB2t1bWEtY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw\\nggEKAoIBAQDOmHHVAd1CHf2Lw56CzJGwWnxo3C4HpiRCjwrDiv+miT1s5A6q5bmz\\nyUv9K8IOytB4ML7pbUlTPuIQXzGVRkMY5117TkEq4Pxt0WE0LYkHWNfRG3Fyd/d3\\nsklQ5PCKzy2SNC3n2+ZjFoIOEX53pEcQrgcxu1IAJncv5b5dBH/aDboRhsZcINkn\\nQpyA+/YujXcpH59xkPakPjl3EQHzz153vuwo4iZaHjNtxIixtG6zX+vf2WjVO7xz\\n9lJ9aFnyYZKC6OOkfQ5kRqV+xrm9cIrOmlNhUwGPHL/wQkanbyvKh+LkPhsjbEnJ\\nvY9gf5HnhmxlMM6LNV0tlGhKrt6v8a3LAgMBAAGjYTBfMA4GA1UdDwEB/wQEAwIC\\npDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/BAUwAwEB\\n/zAdBgNVHQ4EFgQUD9RKyzcPd2OJxkNtm5MgZNDjmkQwDQYJKoZIhvcNAQELBQAD\\nggEBAFheBphBMGKMyWD6oNyazrRfGwpwRFOP+wVssZ2snVdKNI+ynrGjmyC1xsYH\\ncu9J7MJbWNPyLkmvWdZgoIRoRxpLDQpClIjkYqZ8JvIcKXtkww34C7QYE96Eqy5R\\noKpS5+ODZS2Pp/VEkRkpYOqxiBI6d5JitTJYkqI+CSdA6B3HTKv3EJW+mW+AvRpV\\nNyMmCil0QWapVhbRnqWuK1smtKx0lCuoCqt8fm3M1sygWpQ2PUtNoqkcX6zYcN1t\\n6DXhWzMRCez4idKkZZS1hYzHeRHlk4KioJBYY/eilcXl44DTifJAaco5AlI4wQiQ\\nUP2bZGz6OqRW8arBPR7ZEq72sYY=\\n-----END CERTIFICATE-----\\n\",\"caCertFile\":\"\",\"retry\":{\"backoff\":\"3s\",\"maxDuration\":\"5m0s\"},\"url\":\"https://kuma-control-plane.kuma-system:5678\"},\"dataplane\":{\"drainTime\":\"30s\",\"mesh\":\"default\",\"name\":\"$(POD_NAME).$(POD_NAMESPACE)\"},\"dataplaneRuntime\":{\"binaryPath\":\"envoy\",\"dataplaneTokenPath\":\"/var/run/secrets/kubernetes.io/serviceaccount/token\"}}"}
2020-12-08T16:43:32.638Z	INFO	kuma-dp.run	picked a free port for Envoy Admin API to listen on	{"port": "9901"}
2020-12-08T16:43:32.638Z	INFO	kuma-dp.run	generated Envoy configuration will be stored in a temporary directory	{"dir": "/tmp/kuma-dp-710500977"}
2020-12-08T16:43:32.638Z	INFO	kuma-dp.run	starting Kuma DP	{"version": "1.0.3"}
2020-12-08T16:43:32.639Z	INFO	kuma-dp.run.envoy	generating bootstrap configuration
2020-12-08T16:43:32.639Z	INFO	accesslogs-server	starting Access Log Server	{"address": "unix:///tmp/b8efd1fb-6f07-4fad-bed9-75428515a76e.sock"}
2020-12-08T16:43:32.639Z	INFO	dataplane	trying to fetch bootstrap configuration from the Control Plane
2020-12-08T16:43:32.650Z	INFO	dataplane	Dataplane entity is not yet found in the Control Plane. If you are running on Kubernetes, CP is most likely still in the process of converting Pod to Dataplane. Retrying.	{"backoff": "3s"}

The failure appears to come from fetching the Envoy proxy configuration: the call kuma-dp makes from the sidecar to the https://kuma-control-plane.kuma-system:5678/bootstrap endpoint does not succeed.

https://github.com/kumahq/kuma/blob/a872b19193ee26a2182059af5f686c104ccd8001/app/kuma-dp/pkg/dataplane/envoy/remote_bootstrap.go#L141
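
One way to corroborate the log line above ("Dataplane entity is not yet found in the Control Plane") is to query the Dataplane objects directly; if the injector never converted the Pod to a Dataplane, the bootstrap lookup has nothing to match. A hedged sketch:

```shell
#!/bin/sh
# Hedged check: if the sidecar injector had converted the Pod to a
# Dataplane, objects would be listed here; an empty result is consistent
# with a failing /bootstrap lookup. The `command -v` guards just let the
# sketch degrade gracefully on a machine without the tools installed.
command -v kubectl >/dev/null 2>&1 && kubectl get dataplanes.kuma.io -n kuma-demo
command -v kumactl >/dev/null 2>&1 && kumactl get dataplanes
# The endpoint the sidecar is calling (from the config dump above):
cp_url="https://kuma-control-plane.kuma-system:5678"
echo "${cp_url}/bootstrap"
```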

Cleanup

Removing loft from the cluster: https://loft.sh/docs/guides/administration/uninstall

robraux commented Dec 8, 2020

KUMA_DATAPLANE_NAME does not appear to be set correctly, causing the call to /bootstrap to return 404; it looks like this literal value is what gets pulled into the POST call.

export KUMA_DATAPLANE_ADMIN_PORT='9901'
export KUMA_DATAPLANE_DRAIN_TIME='30s'
export KUMA_DATAPLANE_MESH='default'
export KUMA_DATAPLANE_NAME='$(POD_NAME).$(POD_NAMESPACE)'
export KUMA_DATAPLANE_RUNTIME_TOKEN_PATH='/var/run/secrets/kubernetes.io/serviceaccount/token'
export LANG='C.UTF-8'
export PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
export POD_NAME='kuma-demo-app-7dfd4dfbd4-dt4qm'
export POD_NAMESPACE='kuma-demo'
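
A likely mechanism (my hypothesis, not confirmed in Kuma or Loft code): Kubernetes expands $(VAR) references in a container's env values, but only for variables defined earlier in the same container spec, and the shell never expands that syntax at all. The literal surviving into KUMA_DATAPLANE_NAME suggests the kubelet-side expansion never happened. A small demonstration of the two behaviors:

```shell
#!/bin/sh
# Values copied from the env dump above.
POD_NAME='kuma-demo-app-7dfd4dfbd4-dt4qm'
POD_NAMESPACE='kuma-demo'

# What the dataplane name should resolve to if Kubernetes had expanded
# the $(POD_NAME).$(POD_NAMESPACE) reference in the pod spec:
expected="${POD_NAME}.${POD_NAMESPACE}"
echo "$expected"    # kuma-demo-app-7dfd4dfbd4-dt4qm.kuma-demo

# What kuma-dp actually received: the shell treats $(...) inside single
# quotes as a literal string, so nothing ever substitutes it.
actual='$(POD_NAME).$(POD_NAMESPACE)'
echo "$actual"      # $(POD_NAME).$(POD_NAMESPACE)
```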
