CAPG / CAPI bootstrap with clusterctl

This example uses the latest clusterctl (v1.0.0) and the latest CAPG release that supports v1beta1 (v1.0.0).

Steps to get a running workload cluster, for testing/development purposes.

This is a quick overview; for a more in-depth walkthrough, see https://cluster-api.sigs.k8s.io/user/quick-start.html

  1. Create a kind cluster
$ kind create cluster --image kindest/node:v1.22.1 --wait 5m
  2. Export the required variables
$ export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
$ export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
$ export GCP_NODE_MACHINE_TYPE=n1-standard-2
$ export GCP_PROJECT=<YOUR GCP PROJECT>
$ export GCP_REGION=us-east4
$ export IMAGE_ID=<YOUR IMAGE>
$ export GCP_NETWORK_NAME=default
$ export CLUSTER_NAME=test # you can choose any name; "test" is used in this example
  3. Set up the network. In this example we use the default network, so we create a Cloud Router and Cloud NAT to give the workload cluster internet access
$ gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"

$ gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
  4. Deploy CAPI/CAPG
$ clusterctl init --infrastructure gcp
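Optionally, verify that the provider controllers came up before moving on. This is a quick sanity check; the namespaces below are the defaults that clusterctl init creates for the core provider and the GCP provider.

$ kubectl get pods -n capi-system
$ kubectl get pods -n capg-system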
  5. Generate the workload cluster config and apply it
$ clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.3 > workload-test.yaml

$ kubectl apply -f workload-test.yaml
  6. You can check the CAPG manager logs (see the example below) or watch the GCP console; the control plane VM should be up and running soon
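A minimal way to follow those logs from the management cluster, assuming the default namespace and deployment name that clusterctl init installs for the GCP provider:

$ kubectl logs -n capg-system deployment/capg-controller-manager -f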

  7. Check the cluster status

$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON                 SINCE  MESSAGE
/test                                                              False  Info      WaitingForKubeadmInit  5s
├─ClusterInfrastructure - GCPCluster/test
└─ControlPlane - KubeadmControlPlane/test-control-plane            False  Info      WaitingForKubeadmInit  5s
  └─Machine/test-control-plane-x57zs                               True                                    31s
    └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
    
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
test-control-plane   test                                           1                  1         1             2m9s   v1.22.3
  8. Get the kubeconfig for the workload cluster
$ clusterctl get kubeconfig $CLUSTER_NAME

$ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
  9. Apply the CNI
$ kubectl --kubeconfig=./workload-test.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
  10. Wait a bit and you should see this when you get the kubeadmcontrolplane:
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
test-control-plane   test      true          true                   1          1       1         0             6m33s   v1.22.3


$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.3
  11. Edit the MachineDeployment in workload-test.yaml; it has 0 replicas. Set the replicas to the number of nodes you want; in this case we used 2, as in the excerpt below
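An illustrative excerpt of the generated manifest after the edit (the MachineDeployment for this example cluster is named test-md-0, as shown in the clusterctl output further down):

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
spec:
  clusterName: test
  replicas: 2  # was 0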

  12. Apply the `workload-test.yaml` again

  13. After a few minutes everything should be up and running

$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
/test                                                              True                     15m
├─ClusterInfrastructure - GCPCluster/test
├─ControlPlane - KubeadmControlPlane/test-control-plane            True                     15m
│ └─Machine/test-control-plane-x57zs                               True                     19m
│   └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
└─Workers
  └─MachineDeployment/test-md-0                                    True                     10m
    └─2 Machines...                                                True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6

$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.3
test-md-0-b7766            Ready    <none>                 17m   v1.22.3
test-md-0-wsgpj            Ready    <none>                 17m   v1.22.3
  14. This is a regular Kubernetes cluster; you can deploy your apps and anything else you want, for example:
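A quick smoke test against the workload cluster (nginx here is just a placeholder image):

$ kubectl --kubeconfig=./workload-test.kubeconfig create deployment nginx --image=nginx
$ kubectl --kubeconfig=./workload-test.kubeconfig get pods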

  15. To delete the workload cluster

$ kubectl delete cluster $CLUSTER_NAME
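Deletion takes a few minutes while CAPG tears down the GCP resources; you can watch the Cluster object until it is gone before removing the router/NAT:

$ kubectl get clusters -w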
  16. Delete the router/NAT
$ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
    --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"

$ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
    --region="${GCP_REGION}"
  17. Delete the kind cluster
$ kind delete cluster