k8s_name=kind-$(date +"%y%m%d%H%M")
cat <<EOF | kind create cluster --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ${k8s_name}
nodes:
- role: control-plane
EOF
k8s_name_tlspk=$(tr "-" "_" <<< ${k8s_name})
jsctl clusters connect ${k8s_name_tlspk}
jsctl operator deploy --auto-registry-credentials
jsctl operator installations apply --auto-registry-credentials --cert-manager-replicas 1 --csi-driver
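Before proceeding, it may be worth confirming that the operator's components came up cleanly. A minimal check, assuming jsctl deployed everything into its default jetstack-secure namespace (the Installation object name matches the one edited later in this walkthrough):

```shell
# the operator, cert-manager and csi-driver pods should all reach Running
kubectl -n jetstack-secure get pods
# the Installation status should eventually report its components as ready
kubectl get installation installation
```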
To set up a TLSPC Issuer, you must first create a Kubernetes Secret containing your TLSPC API key. This key can be generated or retrieved at https://ui.venafi.cloud/platform-settings/user-preferences

apikey=<YOUR_TLSPC_API_KEY>
kubectl create secret generic \
tlspc-secret \
--namespace=jetstack-secure \
--from-literal=apikey="${apikey}"
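You can sanity-check the Secret before any issuer references it. Note this prints the raw API key to your terminal, so only do it in a private session:

```shell
# decode the stored key and confirm it matches what you pasted in
kubectl -n jetstack-secure get secret tlspc-secret -o jsonpath='{.data.apikey}' | base64 -d
```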
Note that we can use the jetstack-secure namespace to hold a Secret for a ClusterIssuer object because, when the cert-manager pod/container is launched from jsctl, it uses the --cluster-resource-namespace=$(POD_NAMESPACE) override.
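You can verify that the override is in effect by inspecting the controller's arguments. This sketch assumes the controller Deployment is named cert-manager in the jetstack-secure namespace; adjust if your installation differs:

```shell
# list the controller args and look for the namespace override
kubectl -n jetstack-secure get deploy cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep cluster-resource-namespace
```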
You can add new Issuer resources to your cluster by editing the Installation manifest. Open the manifest in whatever EDITOR you have configured, as follows.
kubectl edit Installation installation
You can add a TLSPC ClusterIssuer by inserting the following snippet under the spec: section of the Installation manifest.
issuers:
  - clusterScope: true
    name: tlspc
    venafi:
      zone: "Built-In CA\\Built-In CA Template"
      cloud:
        apiTokenSecretRef:
          name: tlspc-secret
          key: apikey
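Once the operator reconciles the change, the new issuer should validate the API key and report itself ready:

```shell
# READY should show True once the TLSPC credentials are verified
kubectl get clusterissuer tlspc
```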
Saving the file will apply those changes.
Ask cert-manager to create a test certificate, referencing the TLSPC ClusterIssuer.
kubectl create namespace tests
cat << EOF | kubectl -n tests apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: www.cm-tlspc-test.com
spec:
  secretName: www-cm-tlspc-test-com-tls
  commonName: www.cm-tlspc-test.com # Common Name is a Venafi baseline requirement
  dnsNames:
    - www.cm-tlspc-test.com
  issuerRef:
    name: tlspc
    kind: ClusterIssuer
    group: cert-manager.io
EOF
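Issuance via TLSPC is not instant, so you may want to block until the Certificate reports ready (the timeout here is an arbitrary choice):

```shell
# wait for issuance to complete, or time out after two minutes
kubectl -n tests wait certificate www.cm-tlspc-test.com --for=condition=Ready --timeout=120s
# on failure, the CertificateRequest events usually explain why
kubectl -n tests describe certificaterequest
```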
The new cert should appear in TLSPC under https://ui.venafi.cloud/certificate-issuance/certificates-inventory.
It is also represented in your cluster by a trio of Kubernetes objects.
kubectl -n tests get certificates,certificaterequests,secrets
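To see what was actually issued, you can decode the leaf certificate straight out of the Secret (the tls.crt key is the standard location for kubernetes.io/tls Secrets):

```shell
# extract the certificate and confirm its subject and validity window
kubectl -n tests get secret www-cm-tlspc-test-com-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates
```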
Tidy up before moving on.
kubectl delete namespace tests
kubectl create namespace demos
cat << EOF | kubectl -n demos apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-driver-demo
  labels:
    app: csi-driver-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: csi-driver-demo
  template:
    metadata:
      labels:
        app: csi-driver-demo
    spec:
      containers:
        - name: busybox
          image: busybox
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/tls"
              name: tls
          command: [ "sleep", "1000000" ]
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
      volumes:
        - name: tls
          csi:
            driver: csi.cert-manager.io
            readOnly: true
            volumeAttributes:
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              csi.cert-manager.io/issuer-name: tlspc
              csi.cert-manager.io/dns-names: \${POD_NAME}.\${POD_NAMESPACE}.svc.cluster.local
              csi.cert-manager.io/common-name: \${POD_NAME}.\${POD_NAMESPACE}.svc.cluster.local
EOF
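The pod can only become Ready once the CSI driver has obtained and mounted its certificate, so waiting on the rollout doubles as a check that issuance worked:

```shell
# blocks until the replica is Running with its certificate volume mounted
kubectl -n demos rollout status deploy/csi-driver-demo --timeout=120s
```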
There should be no Certificate or related Secret objects in the namespace; however, CertificateRequests (CRs) may remain.
kubectl -n demos get certificates,certificaterequests,secrets
In the absence of matching Certificate objects, TLSPK treats CR objects as if they were Certificate objects. Unlike Certificate objects, CR objects do not persist forever; when they are eventually garbage collected, TLSPK memorializes their existence by marking them as "ephemeral" (trash can icon).
There should be one certificate per container, mounted inside.
kubectl -n demos exec -it deploy/csi-driver-demo -- ls -l /tls
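To inspect the mounted certificate itself, note that the busybox image has no openssl, so stream the file out and decode it locally (tls.crt is the csi-driver's default file name; the ls above shows what is actually present):

```shell
# the subject CN should embed the pod's own name and namespace
kubectl -n demos exec deploy/csi-driver-demo -- cat /tls/tls.crt \
  | openssl x509 -noout -subject
```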
This cert should appear in TLSPC under https://ui.venafi.cloud/certificate-issuance/certificates-inventory.
Try changing the deployment scale.
kubectl -n demos scale deploy csi-driver-demo --replicas 3
Check again in TLSPC. You should see one cert per pod replica. The one-cert-per-pod model you see here is a clear example of true machine identity in action. This is why the cert-manager csi-driver is so important.
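You can confirm the one-cert-per-pod model from the command line too. A sketch, looping over the replicas by their app label:

```shell
# print each replica's certificate subject; every pod gets its own identity
for p in $(kubectl -n demos get pods -l app=csi-driver-demo -o name); do
  kubectl -n demos exec "${p}" -- cat /tls/tls.crt | openssl x509 -noout -subject
done
```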
The quickest way to undo ALL of the above is to destroy the KinD cluster and delete its registration entry in TLSPK.
kind delete cluster --name ${k8s_name}
jsctl clusters delete ${k8s_name_tlspk} --force
Remember to mop up the dangling service account for the cluster in JSS.