@akutz
Created December 18, 2019 21:27
Debugging CAPV with Andrew's quiver
# The manifests image to use.
export CAPV_MANIFESTS_IMAGE="YOUR_MANIFESTS_IMAGE"
# The name of the cluster to create. This impacts the name of the Kind
# bootstrap cluster as well.
export CLUSTER_NAME="YOUR_CLUSTER_NAME" && \
export KUBECONFIG="$(kind get kubeconfig-path --name "${CLUSTER_NAME}")"
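# For reference only, an illustrative example of the exports above; the image
# tag and cluster name are placeholders rather than values from this gist, so
# substitute the manifests image and cluster name you actually use:
#
#   export CAPV_MANIFESTS_IMAGE="gcr.io/cluster-api-provider-vsphere/release/manifests:latest"
#   export CLUSTER_NAME="capv-debug"
#   export KUBECONFIG="$(kind get kubeconfig-path --name "${CLUSTER_NAME}")"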
# Generate the manifests
source envvars.txt && \
rm -fr "out/${CLUSTER_NAME}" && \
docker run --rm \
  -v "$(pwd)":/out \
  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
  "${CAPV_MANIFESTS_IMAGE}" \
  -c "${CLUSTER_NAME}" \
  -M 4
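# Not part of the original flow, but a quick sanity check that the container
# above produced what the clusterctl command below expects: the output
# directory should contain at least addons.yaml, cluster.yaml,
# controlplane.yaml, and provider-components.yaml.
#
#   ls -l "out/${CLUSTER_NAME}"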
#
# WARNING
#
# Remove the first line (the govc vm.destroy command) from the next snippet if
# you don't have govc configured to talk to your vSphere environment.
#
# The snippet ensures the previous control plane node is gone, removes a
# possibly dangling bootstrap cluster, and uses clusterctl and Kind to
# bootstrap a new cluster. Please note the "--bootstrap-flags" flag, which
# ensures clusterctl uses a deterministic name when creating the bootstrap
# cluster with Kind. Otherwise the static string "clusterapi" is no longer
# used as the cluster name; it is now just a prefix with a dynamic suffix.
{ govc vm.destroy "${CLUSTER_NAME}-controlplane-0" 2>/dev/null || true; } && \
{ kind delete cluster --name "${CLUSTER_NAME}" 2>/dev/null || true; } && \
time bin/clusterctl create cluster \
  -a ./out/"${CLUSTER_NAME}"/addons.yaml \
  -c ./out/"${CLUSTER_NAME}"/cluster.yaml \
  -m ./out/"${CLUSTER_NAME}"/controlplane.yaml \
  -p ./out/"${CLUSTER_NAME}"/provider-components.yaml \
  --kubeconfig-out ./out/"${CLUSTER_NAME}"/kubeconfig \
  --bootstrap-type kind \
  --bootstrap-flags name="${CLUSTER_NAME}" \
  -v 4
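# Not part of the original flow: once clusterctl finishes (and, if pivoting is
# enabled, the management components have moved to the new cluster), the
# cluster can be sanity-checked with the kubeconfig written by
# "--kubeconfig-out" above. The second command assumes the Cluster API CRDs
# are present in the target cluster.
#
#   kubectl --kubeconfig ./out/"${CLUSTER_NAME}"/kubeconfig get nodes -o wide
#   kubectl --kubeconfig ./out/"${CLUSTER_NAME}"/kubeconfig get machines --all-namespaces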
# The name of the cluster to create. This impacts the name of the Kind
# bootstrap cluster as well.
export CLUSTER_NAME="YOUR_CLUSTER_NAME" && \
export KUBECONFIG="$(kind get kubeconfig-path --name "${CLUSTER_NAME}")"
# This will try to tail the CAPV manager logs in the bootstrap cluster
# until the CAPV manager in the bootstrap cluster has logs to tail. This
# will print errors BEFORE the bootstrap cluster and manager exist and
# AFTER the manager and bootstrap cluster are gone.
while ! kubectl -n capv-system logs \
  $(kubectl get pods -n capv-system | tail -n1 | awk '{print $1}') -f || \
  true; do sleep 5; done
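# Not part of the original flow: the pod-name lookup above simply grabs the
# last pod listed in capv-system. Listing pods first (here or across all
# namespaces, since clusterctl installs the other controller managers into the
# bootstrap cluster as well) makes it clear which pod the loop is tailing.
#
#   kubectl -n capv-system get pods
#   kubectl get pods --all-namespaces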