Pedantic ADHD Guide to Kubernetes Provisioning
This is my guide to provisioning Kubernetes on Azure, Google Cloud, or AWS using each provider's CLI tools. It requires an account with the cloud provider and the corresponding CLI tools installed locally. The goal is to quickly set up disposable test clusters in a minimalist way.
Some tools used here:
- General
  - Cloud CLI tools
    - Azure CLI (az) - tool used to provision Azure cloud resources
    - AWS CLI (aws) - tool used to provision AWS cloud resources
    - Google Cloud SDK (gcloud) - tool used to provision Google Cloud resources
  - Kubernetes Tools
    - Kubectl client (kubectl) - client tool to interact with a Kubernetes cluster
    - Helm (helm) - tool to deploy Kubernetes packages
- Cloud Provisioning
  - Managing kubectl versions with asdf: https://medium.com/@joachim8675309/managing-the-many-k8s-versions-2f9216efb0bb
  - Managing terraform versions with tfenv: https://joachim8675309.medium.com/install-terraform-with-tfenv-893433f348dd
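Before provisioning anything, it helps to confirm the tooling is actually installed and on your PATH. A minimal sanity check might look like the following (assuming you use all three clouds plus eksctl, direnv, and asdf; drop whatever you don't need):

# verify the tooling is installed (exact versions will vary)
az --version
aws --version
gcloud version
eksctl version
kubectl version --client
helm version
direnv version
asdf --version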
Here's a quick and easy way to bring up disposable Kubernetes clusters on Azure, Google Cloud, or AWS using the CLI tools from these cloud providers. Afterward, you can use kubectl, helm, or terraform to deploy applications on Kubernetes.
cat <<-'EOF' > .envrc
export AZ_AKS_CLUSTER_NAME="my-aks-cluster"
export AZ_RESOURCE_GROUP="my-aks-cluster-rg"
export AZ_LOCATION="eastus"
export KUBECONFIG="$HOME/.kube/$AZ_LOCATION.$AZ_AKS_CLUSTER_NAME.yaml"
EOF
direnv allow
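Note that direnv only picks up .envrc if its shell hook is installed; if you'd rather not use direnv, sourcing the file by hand works just as well. A sketch, assuming bash:

# one-time direnv hook for bash (add to ~/.bashrc)
eval "$(direnv hook bash)"

# alternatively, load the variables manually into the current shell
source .envrc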
create_k8s_cluster()
{
  local AZ_AKS_CLUSTER_NAME=${1:-"my-aks-cluster"}
  local AZ_RESOURCE_GROUP=${2:-"my-aks-cluster-rg"}
  local AZ_LOCATION=${3:-"eastus"}

  # provision Kubernetes
  # NOTE: default quota is 2 VMs per location
  az aks create \
    --resource-group ${AZ_RESOURCE_GROUP} \
    --name ${AZ_AKS_CLUSTER_NAME} \
    --node-count 2 \
    --zones 1 2

  # configure KUBECONFIG
  az aks get-credentials \
    --resource-group ${AZ_RESOURCE_GROUP} \
    --name ${AZ_AKS_CLUSTER_NAME} \
    --file ${KUBECONFIG}
}
rm -rf $KUBECONFIG # delete any existing KUBECONFIG
# provision kubernetes cluster
az group create --name=$AZ_RESOURCE_GROUP --location=$AZ_LOCATION
create_k8s_cluster $AZ_AKS_CLUSTER_NAME $AZ_RESOURCE_GROUP $AZ_LOCATION
# setup client
SVER=$(kubectl version --short | grep -e Server | grep -oP '(\d+\.){2}\d+')
asdf list kubectl | grep -qo $SVER || asdf install kubectl $SVER
asdf global kubectl $SVER
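At this point the kubeconfig should point at the new AKS cluster. A quick sanity check might be:

# confirm the cluster finished provisioning
az aks show \
  --resource-group $AZ_RESOURCE_GROUP \
  --name $AZ_AKS_CLUSTER_NAME \
  --query provisioningState

# confirm kubectl can reach the cluster and both nodes are Ready
kubectl get nodes -o wide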
az aks delete \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_AKS_CLUSTER_NAME}

az group delete --name=$AZ_RESOURCE_GROUP

rm -rf $KUBECONFIG
On GKE, the default identity principal (a fancy term for account) has insecure default privileges. This section shows how to set up a basic cluster that uses least privilege. In a nutshell, you don't want to create a cluster whose default account can grant itself new privileges. That is dangerous, so the minimal setup requires a few extra steps.
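If you want to see what the problem looks like, you can inspect the roles granted to the project's default compute service account, which GKE nodes use unless told otherwise. A rough sketch, run after loading the .envrc below (the PROJECT_NUMBER lookup and filter expression are just one way to do it):

# list roles bound to the default compute service account
PROJECT_NUMBER=$(gcloud projects describe $CLOUDSDK_CORE_PROJECT --format="value(projectNumber)")
gcloud projects get-iam-policy $CLOUDSDK_CORE_PROJECT \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"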
cat <<-'EOF' > .envrc
export CLOUDSDK_CORE_PROJECT="<your-project-name-goes-here>" # set default project
export GKE_PROJECT_ID=$CLOUDSDK_CORE_PROJECT
export gke_NAME="my-gke-cluster"
export GKE_REGION="us-central1"
export KUBECONFIG="$HOME/.kube/$GKE_REGION.$gke_NAME.yaml"
EOF
direnv allow
create_least_priv_gsa()
{
  local gke_NAME=${1:-"my-gke-cluster"}
  local GKE_REGION=${2:-"us-central1"}
  local GKE_SA_NAME="$gke_NAME-sa"
  local GKE_SA_EMAIL="$GKE_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

  ROLES=(
    roles/logging.logWriter
    roles/monitoring.metricWriter
    roles/monitoring.viewer
    roles/stackdriver.resourceMetadata.writer
  )

  gcloud iam service-accounts create $GKE_SA_NAME \
    --display-name $GKE_SA_NAME --project $GKE_PROJECT_ID

  # assign google service account to roles in GKE project
  for ROLE in ${ROLES[*]}; do
    gcloud projects add-iam-policy-binding $GKE_PROJECT_ID \
      --member "serviceAccount:$GKE_SA_EMAIL" \
      --role $ROLE
  done
}
create_k8s_cluster()
{
  local gke_NAME=${1:-"my-gke-cluster"}
  local GKE_REGION=${2:-"us-central1"}
  local GKE_SA_NAME="$gke_NAME-sa"
  local GKE_SA_EMAIL="$GKE_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

  create_least_priv_gsa $gke_NAME $GKE_REGION

  gcloud container clusters create $gke_NAME \
    --project $GKE_PROJECT_ID --region $GKE_REGION --num-nodes 1 \
    --service-account "$GKE_SA_EMAIL" \
    --workload-pool "$GKE_PROJECT_ID.svc.id.goog"
}
rm -rf $KUBECONFIG # delete any existing KUBECONFIG
create_k8s_cluster $gke_NAME $GKE_REGION
# setup client
SVER=$(kubectl version --short | grep -e Server | grep -oP '(\d+\.){2}\d+')
asdf list kubectl | grep -qo $SVER || asdf install kubectl $SVER
asdf global kubectl $SVER
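To confirm the nodes are actually running under the least-privilege service account rather than the default one, something like this should work:

# node pools should report the dedicated service account
gcloud container clusters describe $gke_NAME \
  --project $GKE_PROJECT_ID --region $GKE_REGION \
  --format="value(nodeConfig.serviceAccount)"

# and the nodes should be Ready
kubectl get nodes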
###
# IMPORTANT: Delete persistent storage (PVC) before deleting cluster to avoid
# leftover cloud resources.
#####
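# (sketch) one blunt way to clear PVC-backed disks first -- this assumes the
# cluster is disposable and you really do want to drop ALL persistent volumes
kubectl delete pvc --all --all-namespaces
kubectl get pv   # wait until no bound volumes remain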
gcloud container clusters delete $gke_NAME \
  --project $GKE_PROJECT_ID --region $GKE_REGION

ROLES=(
  roles/logging.logWriter
  roles/monitoring.metricWriter
  roles/monitoring.viewer
  roles/stackdriver.resourceMetadata.writer
)

GKE_SA_NAME="$gke_NAME-sa"
GKE_SA_EMAIL="$GKE_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

# remove role bindings for the google service account
for ROLE in ${ROLES[*]}; do
  gcloud projects remove-iam-policy-binding $GKE_PROJECT_ID \
    --member "serviceAccount:$GKE_SA_EMAIL" \
    --role $ROLE
done

# delete the google service account (requires the full email address)
gcloud iam service-accounts delete $GKE_SA_EMAIL --project $GKE_PROJECT_ID
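To double-check nothing was left behind, you can confirm the service account is actually gone:

# should return nothing once the service account is deleted
gcloud iam service-accounts list \
  --project $GKE_PROJECT_ID \
  --filter="email:$GKE_SA_EMAIL"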
EKS is fairly straightforward to set up using the eksctl tool. This is a CLI written in Go that stands up all the necessary components using CloudFormation. It sets up both the networking components (VPC) and the Kubernetes cluster itself.
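Because eksctl drives everything through CloudFormation, you can always see (or debug) what it created by listing its stacks. A rough sketch, run once the .envrc below is loaded (the eksctl- prefix is simply how eksctl names its stacks):

# clusters eksctl knows about in the region
eksctl get cluster --region $EKS_REGION

# the CloudFormation stacks behind them
aws cloudformation list-stacks \
  --region $EKS_REGION \
  --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
  --query "StackSummaries[?starts_with(StackName, 'eksctl-')].StackName"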
cat <<-'EOF' > .envrc
export AWS_PROFILE="<your-profile-name-goes-here>" # set profile to utilize
export EKS_CLUSTER_NAME="my-eks-cluster"
export EKS_REGION="us-east-2"
export EKS_VERSION="1.22"
export KUBECONFIG=$HOME/.kube/$EKS_REGION.$EKS_CLUSTER_NAME.yaml
EOF
direnv allow
create_k8s_cluster()
{
  local EKS_CLUSTER_NAME=${1:-"my-eks-cluster"}
  local EKS_REGION=${2:-"us-east-2"}
  local EKS_VERSION=${3:-"1.22"}

  eksctl create cluster \
    --region $EKS_REGION \
    --name $EKS_CLUSTER_NAME \
    --version $EKS_VERSION
}
create_k8s_cluster $EKS_CLUSTER_NAME $EKS_REGION $EKS_VERSION
# setup client
SVER=$(kubectl version --short | grep -e Server | grep -oP '(\d+\.){2}\d+')
asdf list kubectl | grep -qo $SVER || asdf install kubectl $SVER
asdf global kubectl $SVER
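Before moving on, it's worth checking which storage class the cluster ships with; on EKS 1.22 this is gp2, backed by the in-tree EBS provisioner:

# inspect the default storage class
kubectl get storageclass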
Starting with Kubernetes 1.23, the default storage class on EKS no longer works out of the box: the in-tree EBS volume plugin now delegates to the Amazon EBS CSI driver, which is not installed by default. You cannot use persistent storage until you install the EBS CSI driver and grant it the IAM permissions it needs. This makes a default cluster rather complex to set up.
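You can verify that the driver is missing on a freshly created 1.23+ cluster; in my experience, both of these come back empty (or at least without ebs.csi.aws.com) until the addon below is installed:

# check whether the EBS CSI driver is registered and running
kubectl get csidrivers
kubectl get pods -n kube-system | grep ebs-csi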
cat <<-'EOF' > .envrc
export AWS_PROFILE="<your-profile-name-goes-here>" # set profile to utilize
export EKS_CLUSTER_NAME="my-eks-cluster"
export EKS_REGION="us-east-2"
export EKS_VERSION="1.25"
export KUBECONFIG=$HOME/.kube/$EKS_REGION.$EKS_CLUSTER_NAME.yaml
EOF
direnv allow
create_k8s_cluster()
{
  local EKS_CLUSTER_NAME=${1:-"my-eks-cluster"}
  local EKS_REGION=${2:-"us-east-2"}
  local EKS_VERSION=${3:-"1.25"}
  # AWS account id, used below to reference the customer-managed policy and role
  local ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

  eksctl create cluster \
    --region $EKS_REGION \
    --name $EKS_CLUSTER_NAME \
    --version $EKS_VERSION

  ## extra requirements for the EBS CSI driver
  eksctl utils associate-iam-oidc-provider \
    --cluster $EKS_CLUSTER_NAME \
    --region=$EKS_REGION \
    --approve

  eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster $EKS_CLUSTER_NAME \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve \
    --role-only \
    --role-name AmazonEKS_EBS_CSI_DriverRole

  # additional IAM policy attached to the EBS CSI driver role
  cat <<-EOF > policy.json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateSnapshot",
          "ec2:AttachVolume",
          "ec2:DetachVolume",
          "ec2:ModifyVolume",
          "ec2:DescribeAvailabilityZones",
          "ec2:DescribeInstances",
          "ec2:DescribeSnapshots",
          "ec2:DescribeTags",
          "ec2:DescribeVolumes",
          "ec2:DescribeVolumesModifications"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateTags"
        ],
        "Resource": [
          "arn:aws:ec2:*:*:volume/*",
          "arn:aws:ec2:*:*:snapshot/*"
        ],
        "Condition": {
          "StringEquals": {
            "ec2:CreateAction": [
              "CreateVolume",
              "CreateSnapshot"
            ]
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteTags"
        ],
        "Resource": [
          "arn:aws:ec2:*:*:volume/*",
          "arn:aws:ec2:*:*:snapshot/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateVolume"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "aws:RequestTag/ebs.csi.aws.com/cluster": "true"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateVolume"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "aws:RequestTag/CSIVolumeName": "*"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteVolume"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "ec2:ResourceTag/ebs.csi.aws.com/cluster": "true"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteVolume"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "ec2:ResourceTag/CSIVolumeName": "*"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteVolume"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "ec2:ResourceTag/kubernetes.io/created-for/pvc/name": "*"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteSnapshot"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "ec2:ResourceTag/CSIVolumeSnapshotName": "*"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:DeleteSnapshot"
        ],
        "Resource": "*",
        "Condition": {
          "StringLike": {
            "ec2:ResourceTag/ebs.csi.aws.com/cluster": "true"
          }
        }
      }
    ]
  }
EOF
  aws iam create-policy \
    --policy-name AmazonEKS_EBS_CSI_DriverRolePolicy \
    --policy-document file://policy.json

  aws iam attach-role-policy \
    --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AmazonEKS_EBS_CSI_DriverRolePolicy \
    --role-name AmazonEKS_EBS_CSI_DriverRole

  eksctl create addon \
    --name aws-ebs-csi-driver \
    --cluster $EKS_CLUSTER_NAME \
    --region=$EKS_REGION \
    --service-account-role-arn arn:aws:iam::$ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole \
    --force
}
create_k8s_cluster $EKS_CLUSTER_NAME $EKS_REGION $EKS_VERSION
# check cluster status
eksctl utils describe-stacks \
--region=$EKS_REGION \
--cluster=$EKS_CLUSTER_NAME
# check ebs driver status
eksctl get addon \
--name aws-ebs-csi-driver \
--region=$EKS_REGION \
--cluster $EKS_CLUSTER_NAME
# setup client
SVER=$(kubectl version --short | grep -e Server | grep -oP '(\d+\.){2}\d+')
asdf list kubectl | grep -qo $SVER || asdf install kubectl $SVER
asdf global kubectl $SVER
# setup new storage class
cat <<EOF > sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF
# install new storage class using functional driver
kubectl apply -f sc.yaml
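# (sketch) optional smoke test for the new storage class -- because ebs-sc uses
# WaitForFirstConsumer, a PVC only binds once a pod uses it, so test with a
# throwaway pod + claim (the ebs-sc-test names are arbitrary)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-sc-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ebs-sc-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ebs-sc-test
EOF
kubectl get pvc ebs-sc-test   # should reach Bound once the pod is scheduled
kubectl delete pod/ebs-sc-test pvc/ebs-sc-test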
# once the new storage class checks out, make it the default
kubectl patch storageclass gp2 \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass ebs-sc \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
###
# IMPORTANT: delete K8S objects that indirectly provision cloud resources before
# deleting the cluster, e.g.
# * Services of type LoadBalancer that created an NLB or ELB (aka classic ELB)
# * Ingress resources that created an ALB (aka ELBv2)
# * Persistent storage (PVC) that created and mounted external EBS volumes
#####
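# (sketch) a quick sweep before tearing the cluster down: anything listed here
# maps to an external AWS resource (NLB/ELB, ALB, or EBS volume)
kubectl get svc --all-namespaces | grep LoadBalancer
kubectl get ingress --all-namespaces
kubectl get pvc --all-namespaces

# delete the offending objects; for a disposable cluster the blunt version is
kubectl delete pvc --all --all-namespaces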
eksctl delete cluster \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME
You can use kubectl to deploy Kubernetes objects, or deploy fully baked Kubernetes applications with helm. Terraform can deploy Kubernetes objects or Helm charts as well. The advantage of using Terraform is that you can add automation on top of Kubernetes, similar to Helm charts, or automate the Helm chart config values. This can be useful for things like initializing object storage (S3, GCS, Azure Blob) or other cloud resources that are used in conjunction with Kubernetes objects.
You need to initialize a Kubernetes (or Helm) provider with the credentials needed to access the Kubernetes cluster. These credentials can be fetched from the cloud provider using your cloud credentials.
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "my-aks-cluster"
resource_group_name = "my-aks-rg"
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
kubernetes {
host = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
}
provider "google" {
project = "my-gcp-project"
region = "us-central1"
}
data "google_container_cluster" "gke" {
name = "my-gke-cluster"
location = "us-central1"
}
data "google_client_config" "default" {}
provider "kubernetes" {
host = data.google_container_cluster.gke.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.default.access_token
}
provider "helm" {
kubernetes {
host = data.google_container_cluster.gke.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.default.access_token
}
}
provider "aws" {
region = "us-west-2"
}
data "aws_eks_cluster" "eks" {
name = "my-eks-cluster"
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)
token = data.aws_eks_cluster.eks.token
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = [
"eks",
"get-token",
"--cluster-name",
data.aws_eks_cluster.cluster.name
]
}
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = [
"eks",
"get-token",
"--cluster-name",
data.aws_eks_cluster.cluster.name
]
}
}
}
Once the Helm provider is initialized, you can install Helm charts.
# example kubernetes resource
resource "kubernetes_namespace" "nginx-ingress" {
  metadata {
    name = "ingress-nginx"
  }
}

# example helm resource
resource "helm_release" "nginx-ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = kubernetes_namespace.nginx-ingress.metadata.0.name

  values = [
    yamlencode({
      controller = {
        replicaCount = 2
      }
    })
  ]
}
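With the providers and resources in place, the usual Terraform workflow applies; roughly:

terraform init      # download the kubernetes, helm, and cloud providers
terraform plan      # preview the namespace and helm release
terraform apply     # deploy to the cluster referenced by the provider config
terraform destroy   # clean up when the test is done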