You will see the following cert-manager CSI drivers demonstrated side-by-side: csi-driver and csi-driver-spiffe.
Create a disposable KinD cluster as follows.
nickname=<YOUR_NICKNAME>
k8s_name=${nickname}-$(date +"%y%m%d%H%M")
cat <<EOF | kind create cluster --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ${k8s_name}
nodes:
- role: control-plane
EOF
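kind names the kubectl context after the cluster, prefixed with "kind-", and switches to it automatically. As a quick sanity check before proceeding:
kubectl config current-context # expect kind-${k8s_name}
kubectl get nodes # the control-plane node should reach Ready within a minute or so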
NOTE: k3d clusters currently cannot be made to work with the CSI drivers, hence the use of KinD here.
NOTE: the TLSPK helper script is not the only way to prepare Kubernetes clusters with TLSPK, so feel free to complete these steps with whatever tools you choose.
From a Bash/Zsh session, download the TLSPK helper script.
cd ${HOME}
curl -fsSLO https://venafi-ecosystem.s3.amazonaws.com/tlspk/v1/tlspk-helper.sh && chmod 700 tlspk-helper.sh
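If you want to eyeball what you just downloaded before running it (always wise with curl-fetched scripts), the first line should be a shell shebang:
head -n 1 tlspk-helper.sh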
Set the TLSPK service account credentials as environment variables. These can be generated via https://platform.jetstack.io/org/PLACE_ORG_NAME_HERE/manage/service_accounts (replace PLACE_ORG_NAME_HERE with your organization name).
export TLSPK_SA_USER_ID=<ID>@<ORG>.platform.jetstack.io
export TLSPK_SA_USER_SECRET='<USER_SECRET>' # leave the quotes in place to preserve any control chars in the user secret
You can check these variables are in place without exposing the secret, as follows.
env | grep '^TLSPK_' | awk -F '=' '{print $1"=<redacted>"}'
The following steps will deploy the TLSPK Operator and cert-manager.
./tlspk-helper.sh install-operator --auto-approve
./tlspk-helper.sh deploy-operator-components --auto-approve
Confirm that baseline TLSPK components successfully installed.
kubectl -n jetstack-secure get deploy
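Deployments can take a short while to become ready. To optionally block until they all report Available (the 300s timeout here is an arbitrary choice):
kubectl -n jetstack-secure wait deploy --all --for=condition=Available --timeout=300s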
NOTE: the TLSPK agent is omitted as it's not required for this demo.
The csi-driver documentation states the following.
Note it is not possible to use SelfSigned Issuers with the CSI Driver. In order for cert-manager to self sign a certificate, it needs access to the secret containing the private key that signed the certificate request to sign the end certificate. This secret is not used and so not available in the CSI driver use case.
Looking beyond the self-signed option, the next simplest cert-manager issuer is the CA issuer, which signs from a CA key pair stored in a Secret. The TLSPK Operator will do the necessary plumbing to set this up.
Deploy the CSI drivers by patching the TLSPK Installation resource.
kubectl patch installation jetstack-secure --type merge --patch-file <(cat << EOF
spec:
  issuers:
    - clusterScope: true
      name: ca
      ca:
        secretName: ca
        selfSignedCA:
          commonName: ca
          subject:
            organizations:
              - cluster.local # <-- trust domain(?)
  csiDrivers:
    certManager: {} # <-- csi-driver (also uses "ca" issuer, but via pod specs)
    certManagerSpiffe: # <-- csi-driver-spiffe
      issuerRef:
        name: ca
        kind: ClusterIssuer
EOF
)
Confirm these components were installed successfully.
kubectl -n jetstack-secure get deploy,ds,clusterissuers
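It's also worth confirming the CA issuer is usable before requesting certificates from it (the READY column should show True):
kubectl get clusterissuer ca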
The next section demos csi-driver.
kubectl create namespace csi-demos
cat << EOF | kubectl -n csi-demos apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-demo
  labels:
    app: csi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-demo
  template:
    metadata:
      labels:
        app: csi-demo
    spec:
      containers:
        - name: busybox
          image: busybox
          volumeMounts:
            - mountPath: "/tls"
              name: tls
          command: [ "sleep", "1000000" ]
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
      volumes:
        - name: tls
          csi:
            driver: csi.cert-manager.io
            readOnly: true
            volumeAttributes:
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              csi.cert-manager.io/issuer-name: ca
              csi.cert-manager.io/dns-names: \${POD_NAME}.\${POD_NAMESPACE}.svc.cluster.local
              csi.cert-manager.io/common-name: \${POD_NAME}.\${POD_NAMESPACE}.svc.cluster.local
EOF
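Wait for the pod to start, then list the mounted volume. Assuming the driver's default file names, it should contain ca.crt, tls.crt and tls.key:
kubectl -n csi-demos rollout status deploy/csi-demo
kubectl -n csi-demos exec deploy/csi-demo -- ls /tls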
NOTE: ${POD_NAME} and ${POD_NAMESPACE} are resolved internally by the driver (hence the backslash-escaping in the heredoc above), and ${POD_NAME} is unique for each pod in a Deployment, so we have created a pure form of machine identity. In other words, we get exactly one cert per pod, for example "csi-demo-869c669f88-2jths.csi-demos.svc.cluster.local".
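The driver fulfils each mount via a cert-manager CertificateRequest, so one way to observe the one-cert-per-pod behaviour is to scale up and count the requests (each replica should have its own):
kubectl -n csi-demos scale deploy/csi-demo --replicas=3
kubectl -n csi-demos get certificaterequests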
Decode the certificate mounted inside the pod to reveal the DNS name.
csifile=$(mktemp)
kubectl -n csi-demos exec deploy/csi-demo -- cat /tls/..data/tls.crt > ${csifile} # no -t flag, so the PEM output isn't CRLF-mangled
openssl x509 -in ${csifile} -text -noout | grep 'DNS:' # SUCCESS!
openssl x509 -in ${csifile} -text -noout | grep 'URI:' # FAIL!
Example output:
DNS:csi-demo-568d7c6655-n5kf6.csi-demos.svc.cluster.local
The next section demos csi-driver-spiffe.
kubectl create namespace spiffe-demos
cat << EOF | kubectl -n spiffe-demos apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spiffe-sa
---
# csi-driver-spiffe requests certificates as the mounting pod's ServiceAccount,
# so that ServiceAccount needs permission to create CertificateRequests
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spiffe-role
rules:
  - apiGroups: ["cert-manager.io"]
    resources: ["certificaterequests"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spiffe-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spiffe-role
subjects:
  - kind: ServiceAccount
    name: spiffe-sa
    namespace: spiffe-demos # required for ServiceAccount subjects
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spiffe-demo
  labels:
    app: spiffe-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spiffe-demo
  template:
    metadata:
      labels:
        app: spiffe-demo
    spec:
      serviceAccountName: spiffe-sa
      containers:
        - name: busybox
          image: busybox
          volumeMounts:
            - mountPath: "/var/run/secrets/spiffe.io"
              name: spiffe
          command: [ "sleep", "1000000" ]
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
      volumes:
        - name: spiffe
          csi:
            driver: spiffe.csi.cert-manager.io
            readOnly: true
EOF
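As before, wait for the rollout and list the mounted volume; with the driver's default file names you should see at least tls.crt and tls.key:
kubectl -n spiffe-demos rollout status deploy/spiffe-demo
kubectl -n spiffe-demos exec deploy/spiffe-demo -- ls /var/run/secrets/spiffe.io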
Decode the certificate mounted inside the pod to reveal the URI name.
spiffefile=$(mktemp)
kubectl -n spiffe-demos exec deploy/spiffe-demo -- cat /var/run/secrets/spiffe.io/tls.crt > ${spiffefile} # again, no -t flag
openssl x509 -in ${spiffefile} -text -noout | grep 'DNS:' # FAIL!
openssl x509 -in ${spiffefile} -text -noout | grep 'URI:' # SUCCESS!
Example output:
URI:spiffe://cluster.local/ns/spiffe-demos/sa/spiffe-sa
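These SPIFFE certificates (SVIDs) are deliberately short-lived (the driver's default duration is of the order of an hour) and are rotated in place automatically. You can eyeball the validity window as follows:
openssl x509 -in ${spiffefile} -noout -dates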
NOTE: you'll likely want a distinct ServiceAccount (SA) per Deployment, as it's the SA that makes the SPIFFE ID unique. That said, given that hundreds of pod replicas may then share the same ID, do they really qualify as machine identities?
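Once you're done experimenting, the disposable cluster can be deleted:
kind delete cluster --name ${k8s_name}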