Deploying a CAPD cluster
# An attempt to create a CAPD cluster as explained in https://cluster-api.sigs.k8s.io/user/quick-start.html
# config.yaml for the kind cluster, which maps the host Docker socket into the kind node
cat >config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
EOF
# create the kind management cluster
kind create cluster --config config.yaml
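# Optional sanity check (my addition, not part of the original steps): confirm the kind
# management cluster is up before moving on; the context name assumes the default kind
# cluster name "kind"
kind get clusters
kubectl cluster-info --context kind-kind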
# Install clusterctl
https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
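# A minimal install sketch for Linux amd64, assuming clusterctl v1.0.1 (pick the release
# that matches your setup and adjust the OS/arch suffix for your platform)
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/clusterctl-linux-amd64 -o clusterctl
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin/clusterctl
clusterctl version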
# Initialize the management cluster
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ clusterctl init --infrastructure docker
Fetching providers
Installing cert-manager Version="v1.5.3"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.0.1" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.0.1" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.0.1" TargetNamespace="capi-kubeadm-control-plane-system"
I1119 07:11:09.684195 42420 request.go:665] Waited for 1.018070321s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56340/apis/bootstrap.cluster.x-k8s.io/v1alpha4?timeout=30s
Installing Provider="infrastructure-docker" Version="v1.0.1" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
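# Optional check (my addition): verify the provider controllers and cert-manager pods are
# running in the namespaces listed in the init output above
kubectl get pods -A | grep -E 'capi-|capd-|cert-manager'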
# Define your service and pod CIDRs and the service domain
# The list of service CIDRs, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]
# The list of pod CIDRs, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]
# The service domain, default "cluster.local"
export SERVICE_DOMAIN="k8s.test"
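# Quick check (my addition) that the variables picked up by the template are set as expected
echo "SERVICE_CIDR=$SERVICE_CIDR POD_CIDR=$POD_CIDR SERVICE_DOMAIN=$SERVICE_DOMAIN"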
# Generate the cluster configuration; the resulting manifest can be adjusted to your needs before applying it
clusterctl generate cluster capi-docker --flavor development \
  --kubernetes-version v1.22.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-docker.yaml
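# Optionally confirm the CIDR and domain values above made it into the generated manifest
# (the grep pattern is just an illustration of fields in the Cluster spec)
grep -nE 'cidrBlocks|serviceDomain' capi-docker.yaml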
# Apply the generated cluster config
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ kubectl apply -f capi-docker.yaml
cluster.cluster.x-k8s.io/capi-docker created
dockercluster.infrastructure.cluster.x-k8s.io/capi-docker created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-docker-control-plane created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-docker-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-docker-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-docker-md-0 created
machinedeployment.cluster.x-k8s.io/capi-docker-md-0 created
# Get the kubeconfig of the cluster
clusterctl get kubeconfig capi-docker > capi-docker.kubeconfig
# Fix the kubeconfig (only needed when using Docker on MacOS)
# When using Docker on MacOS, a couple of additional steps are needed to get a working kubeconfig for a workload cluster created with the Docker provider. See "Additional Notes for the Docker Provider": https://cluster-api.sigs.k8s.io/clusterctl/developers.html#additional-notes-for-the-docker-provider
# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-docker-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-docker.kubeconfig
# Ignore the CA, because it is not signed for 127.0.0.1
sed -i -e "s/certificate-authority-data:.*/insecure-skip-tls-verify: true/g" ./capi-docker.kubeconfig
# Wait for some time and check the status of the cluster; on my machine it took ~10 minutes for the control plane node to reach the Ready status
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ kubectl --kubeconfig=capi-docker.kubeconfig get no
NAME                              STATUS   ROLES                  AGE   VERSION
capi-docker-control-plane-rz57g   Ready    control-plane,master   34m   v1.22.0
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ kubectl get kubeadmcontrolplane
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-docker-control-plane capi-docker true 1 1 1 39m v1.22.0
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$
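# Another useful view (my addition, using standard CAPI resources on the management cluster):
# list the Machine objects and their phases to see how far each node has progressed
kubectl get machines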
# Please note that we only have one control plane node so far, even though we asked for 3 control plane and 3 worker nodes
# Let's describe the cluster
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ clusterctl describe cluster capi-docker
NAME READY SEVERITY REASON SINCE MESSAGE
/capi-docker False Warning ScalingUp 72m Scaling up control plane to 3 replicas (actual 1)
├─ClusterInfrastructure - DockerCluster/capi-docker True 72m
├─ControlPlane - KubeadmControlPlane/capi-docker-control-plane False Warning ScalingUp 72m Scaling up control plane to 3 replicas (actual 1)
│ └─Machine/capi-docker-control-plane-rz57g True 34m
└─Workers
└─MachineDeployment/capi-docker-md-0 False Warning WaitingForAvailableMachines 72m Minimum availability requires 3 replicas, current 0 available
└─3 Machines... False Info WaitingForBootstrapData 7m21s See capi-docker-md-0-55f6dc45fb-9sv5t, capi-docker-md-0-55f6dc45fb-gn282, ...
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$
gkailathuvalap@gkailathuva-a01:~/k8s/cluster-api$ kubectl describe machine capi-docker-md-0-55f6dc45fb-9sv5t
Name:         capi-docker-md-0-55f6dc45fb-9sv5t
Namespace:    default
Labels:       cluster.x-k8s.io/cluster-name=capi-docker
              cluster.x-k8s.io/deployment-name=capi-docker-md-0
              machine-template-hash=1192870196
Annotations:  <none>
API Version:  cluster.x-k8s.io/v1beta1
Kind:         Machine
Metadata:
  Creation Timestamp:  2021-11-19T01:53:51Z
  Finalizers:
    machine.cluster.x-k8s.io
  Generate Name:  capi-docker-md-0-55f6dc45fb-
  Generation:     1
  Managed Fields:
    API Version:  cluster.x-k8s.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"machine.cluster.x-k8s.io":
        f:generateName:
        f:labels:
          .:
          f:cluster.x-k8s.io/cluster-name:
          f:cluster.x-k8s.io/deployment-name:
          f:machine-template-hash:
        f:ownerReferences:
          .:
          k:{"uid":"d5c4bea3-40ce-40da-9213-a88dab20c979"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:bootstrap:
          .:
          f:configRef:
            .:
            f:apiVersion:
            f:kind:
            f:name:
            f:namespace:
            f:uid:
        f:clusterName:
        f:infrastructureRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:uid:
        f:version:
      f:status:
        .:
        f:bootstrapReady:
        f:conditions:
        f:lastUpdated:
        f:observedGeneration:
        f:phase:
    Manager:    manager
    Operation:  Update
    Time:       2021-11-19T02:31:50Z
  Owner References:
    API Version:           cluster.x-k8s.io/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  MachineSet
    Name:                  capi-docker-md-0-55f6dc45fb
    UID:                   d5c4bea3-40ce-40da-9213-a88dab20c979
  Resource Version:  10846
  UID:               32f335ec-df59-4115-8107-07a177a60882
Spec:
  Bootstrap:
    Config Ref:
      API Version:  bootstrap.cluster.x-k8s.io/v1beta1
      Kind:         KubeadmConfig
      Name:         capi-docker-md-0-sx69k
      Namespace:    default
      UID:          7c293136-6496-4cd3-b11c-2aca0346c737
  Cluster Name:  capi-docker
  Infrastructure Ref:
    API Version:  infrastructure.cluster.x-k8s.io/v1beta1
    Kind:         DockerMachine
    Name:         capi-docker-md-0-qbdgc
    Namespace:    default
    UID:          4ccff18b-2d5c-460b-9386-77a7c84f9e35
  Version:  v1.22.0
Status:
  Bootstrap Ready:  true
  Conditions:
    Last Transition Time:  2021-11-19T02:59:26Z
    Message:               1 of 2 completed
    Reason:                WaitingForBootstrapData
    Severity:              Info
    Status:                False
    Type:                  Ready
    Last Transition Time:  2021-11-19T02:31:50Z
    Status:                True
    Type:                  BootstrapReady
    Last Transition Time:  2021-11-19T02:59:26Z
    Message:               0 of 2 completed
    Reason:                WaitingForBootstrapData
    Severity:              Info
    Status:                False
    Type:                  InfrastructureReady
    Last Transition Time:  2021-11-19T01:53:51Z
    Reason:                WaitingForNodeRef
    Severity:              Info
    Status:                False
    Type:                  NodeHealthy
  Last Updated:         2021-11-19T02:31:50Z
  Observed Generation:  1
  Phase:                Provisioning
Events:  <none>
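# To dig further (my addition), the Bootstrap and Infrastructure refs shown above can be
# inspected directly, and the Docker provider controller logs usually explain what the
# machines are waiting for; the object names are taken from the describe output above, and
# the capd-controller-manager deployment name assumes a default CAPD install
kubectl describe kubeadmconfig capi-docker-md-0-sx69k
kubectl describe dockermachine capi-docker-md-0-qbdgc
kubectl logs -n capd-system deployment/capd-controller-manager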