@feczo · Last active March 30, 2022

Preface

Related Resources:

Why not use terraform-provider-kubernetes

The following conversion only works on a single-document YAML file, for use with the terraform-provider-kubernetes provider:

echo 'yamldecode(file("test.yaml"))' | terraform console
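For example, with a minimal single-document manifest, the console prints the equivalent HCL object (a sketch; the namespace name here is made up):

cat > test.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
EOF
echo 'yamldecode(file("test.yaml"))' | terraform console
# prints something like:
# {
#   "apiVersion" = "v1"
#   "kind" = "Namespace"
#   "metadata" = {
#     "name" = "demo"
#   }
# }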

hashicorp/terraform#29729

Multi-kind (multi-document) YAML can be handled by splitting it first with kubectl-slice:

# fetch kubectl-slice
curl -sL https://github.com/patrickdappollonio/kubectl-slice/releases/download/v1.2.1/kubectl-slice_1.2.1_linux_x86_64.tar.gz | tar -xvzf -
rm -rf slices hcl
# split the multi-document YAML, then convert each slice to HCL
./kubectl-slice -f document.yaml -o slices 2>&1 | grep -oP "Wrote \K.+yaml" | while read yamlfile; do
  echo 'yamldecode(file("'$yamlfile'"))' | terraform console >> hcl
done
cat hcl

Even after this, though, the manifest resource only takes a single resource description (an array does not work), so it is still a pain to convert these without further scripting to wrap each object in a block like:

resource "kubernetes_manifest" "crd-custom-name-for-each" {
  provider = kubernetes

  manifest = {$HCL-OBJECT_HERE}
}
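One way to skip the hand-wrapping (a sketch, not part of the original flow, assuming the slices from above sit in ./slices) is to let Terraform loop over the sliced files itself:

cat > manifests.tf <<'EOF'
# one kubernetes_manifest per sliced file
resource "kubernetes_manifest" "sliced" {
  for_each = fileset("${path.module}/slices", "*.yaml")
  manifest = yamldecode(file("${path.module}/slices/${each.value}"))
}
EOF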

See also https://medium.com/@danieljimgarcia/dont-use-the-terraform-kubernetes-manifest-resource-6c7ff4fe629a. It may be best to drop the whole provider and just use Helm, or a custom flow at the end, as this is too much effort for little benefit beyond having a uniform config language.

Zero Trust

Start

Linux housekeeping

sudo su -
apt-get update
apt-get dist-upgrade
apt autoremove
apt-get install apt-transport-https ca-certificates gnupg terraform kubectl google-cloud-sdk
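Note: terraform and google-cloud-sdk (which also ships kubectl) are not in the stock Debian/Ubuntu repos, so the install above assumes the vendor apt repos were added first, roughly like this (a sketch following the vendors' docs):

curl -fsSL https://apt.releases.hashicorp.com/gpg | gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/hashicorp.list
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" > /etc/apt/sources.list.d/google-cloud-sdk.list
apt-get update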

gcloud login / init

gcloud auth application-default login --no-browser
  1. https://github.com/Neutrollized/free-tier-gke
git clone https://github.com/Neutrollized/free-tier-gke.git

GCP Project Config

The commands below enable the required APIs and grant the default compute service account the project IAM admin role:

gcloud services enable --async compute.googleapis.com
gcloud services enable --async container.googleapis.com
gcloud services enable --async cloudresourcemanager.googleapis.com
gcloud services enable --async iam.googleapis.com
gcloud projects add-iam-policy-binding immerspring --member='serviceAccount:351847295691-compute@developer.gserviceaccount.com' --role='roles/resourcemanager.projectIamAdmin'
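Since --async returns before the services are actually on, it is worth confirming they are enabled before running Terraform:

# list should include compute, container, iam and cloudresourcemanager
gcloud services list --enabled | grep -E 'compute|container|iam|cloudresourcemanager'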

These avoid the errors below:

Error: Request Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\"" returned error: Batch request and retried single request "Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""" both failed. Final error: Error applying IAM policy for project "immerspring": Error setting IAM policy for project "immerspring": googleapi: Error 403: Policy update access denied., forbidden

Error: Error creating service account: googleapi: Error 403: Identity and Access Management (IAM) API has not been used in project 351847295691 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/iam.googleapis.com/overview?project=351847295691 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Error: Request Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\"" returned error: Batch request and retried single request "Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""" both failed. Final error: Error retrieving IAM policy for project "immerspring": googleapi: Error 403: Cloud Resource Manager API has not been used in project 351847295691 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=351847295691 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

TF vars

vi variables.tf
# see diff below
❯ git commit -a
[master 45f9b8f] Hedgehog v1
 1 file changed, 14 insertions(+), 9 deletions(-)
~/terra/free-tier-gke master ⇡ 10s
diff --git a/variables.tf b/variables.tf
index 7e2050e..e76bbce 100644
--- a/variables.tf
+++ b/variables.tf
@@ -1,16 +1,20 @@
 #-----------------------
 # provider variables
 #-----------------------
-variable "project_id" {}
+variable "project_id" {
+  default = "immerspring"
+}

-variable "credentials_file_path" {}
+variable "credentials_file_path" {
+  default = "/home/sub/.config/immerspring-7d908732db98.json"
+}

 variable "region" {
-  default = "us-central1"
+  default = "australia-southeast1"
 }

 variable "zone" {
-  default = "us-central1-c"
+  default = "australia-southeast1-a"
 }

 #------------------------------------------------
@@ -69,7 +73,9 @@ variable "iam_roles_list" {
 # GKE Cluster
 #-----------------------------

-variable "gke_cluster_name" {}
+variable "gke_cluster_name" {
+  default = "hedgehog"
+}

 variable "regional" {
   description = "Is this cluster regional or zonal? Regional clusters aren't covered by Google's Always Free tier."
@@ -113,7 +119,7 @@ variable "master_authorized_network_cidr" {

 variable "master_ipv4_cidr_block" {
   description = "CIDR of the master network.  Range must not overlap with any other ranges in use within the cluster's network."
-  default     = ""
+  default     = "172.20.1.0/28"
 }

 variable "network_policy_enabled" {
@@ -156,8 +162,7 @@ variable "confidential_nodes_enabled" {
 #-----------------------------

 variable "machine_type" {
-  default = "n2d-standard-2"
-  #  default = "e2-small"
+  default = "e2-small"
 }

 variable "preemptible" {

To avoid this

Error: Error waiting for creating GKE cluster: Invalid master authorized networks: network "0.0.0.0/0" is not a reserved network, which is required for private endpoints.

  1. enable_private_endpoint -> false
 variable "enable_private_endpoint" {
   description = "When true public access to cluster (master) endpoint is disabled.  When false, it can be accessed both publicly and privately."
-  default     = "true"
+  default     = "false"
 }
  2. alternatively, set master_authorized_network_cidr -> 192.168.100.0/24
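Either way, the resulting endpoint settings can be checked after apply (a sketch; field names per the clusters describe output):

gcloud container clusters describe hedgehog --zone australia-southeast1-a \
  --format='yaml(privateClusterConfig,masterAuthorizedNetworksConfig)'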

TF init

~/terra/free-tier-gke master*
❯ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 4.0"...
- Finding hashicorp/google-beta versions matching "~> 4.0"...
- Installing hashicorp/google v4.15.0...
- Installed hashicorp/google v4.15.0 (signed by HashiCorp)
- Installing hashicorp/google-beta v4.15.0...
- Installed hashicorp/google-beta v4.15.0 (signed by HashiCorp)
[..]
Terraform has been successfully initialized!
[..]

TF Apply

terraform apply
> google_project_iam_binding.gke_sa_iam_binding[0]: Creating...
 
> google_container_cluster.primary: Creating...

> google_container_cluster.primary: Creation complete after 7m56s [id=projects/immerspring/locations/australia-southeast1-a/clusters/hedgehog]

> google_container_node_pool.primary_preemptible_nodes: Still creating... [2m20s elapsed]

> google_container_node_pool.primary_preemptible_nodes: Creation complete after 6m26s [id=projects/immerspring/locations/australia-southeast1-a/clusters/hedgehog/nodePools/preempt-pool]

Apply complete! Resources: 12 added, 0 changed, 0 destroyed.

Outputs:

connect_to_zonal_cluster = "gcloud container clusters get-credentials hedgehog --zone australia-southeast1-a --project immerspring"
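That output is the exact connect command; run it and sanity-check the cluster:

gcloud container clusters get-credentials hedgehog --zone australia-southeast1-a --project immerspring
kubectl get nodes -o wide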

Scale Up

gcloud container clusters resize hedgehog --node-pool preempt-pool --num-nodes 3 --zone australia-southeast1-a

Deploy the Dashboard

Somehow the node pool was not created by Terraform; backfill it manually:

❯ gcloud container node-pools create preempt-pool \
    --cluster hedgehog \
    --zone australia-southeast1-a \
    --enable-autoupgrade \
    --preemptible \
    --num-nodes 1 --machine-type e2-medium \
    --enable-autoscaling --min-nodes=1 --max-nodes=4
Creating node pool preempt-pool...done.
Created [https://container.googleapis.com/v1/projects/immerspring/zones/australia-southeast1-a/clusters/hedgehog/nodePools/preempt-pool].
NAME          MACHINE_TYPE  DISK_SIZE_GB  NODE_VERSION
preempt-pool  e2-medium     100           1.21.9-gke.1002

The v1.0.7 tag failed with ImagePullBackOff; neither the N-1 version nor tag updates on the images helped. The same workload ran fine on an Autopilot test cluster, so some of its network settings were carried over, such as the DNS cache disable, private nodes, and enable_intranode_visibility. See https://learnk8s.io/a/a-visual-guide-on-troubleshooting-kubernetes-deployments/troubleshooting-kubernetes.en_en.v2.pdf
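When chasing an ImagePullBackOff like this, the pod events usually carry the concrete pull error; a generic check (pod name is a placeholder):

kubectl describe pod <failing-pod> -n <namespace> | sed -n '/Events:/,$p'
# or cluster-wide, newest last:
kubectl get events -A --sort-by=.metadata.creationTimestamp | grep -i pull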

Apply dash-user.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Then install the dashboard chart via Helm:

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kubernetes-dashboard
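If the install fails because the chart repo is unknown, add it first (repo URL per the dashboard chart's docs):

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update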

Alternative

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Finally it runs

❯ kubectl get pods -o wide  --namespace=kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE                                      NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-c45b7869d-fs5s4   1/1     Running   0          34m   10.0.0.125   gke-hedgehog-preempt-pool-d7617843-7gd1   <none>           <none>
kubernetes-dashboard-764b4dd7-zrnzx         1/1     Running   0          34m   10.0.0.127   gke-hedgehog-preempt-pool-d7617843-7gd1   <none>           <none>

Proxy up

kubectl proxy&

Login to dash

https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

alias kdash-token='kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"'

then get the token with (needed every so often, hence the alias):

kdash-token
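Note: on Kubernetes 1.24+ the ServiceAccount token secret is no longer auto-created, so the alias above comes up empty; request a short-lived token instead:

kubectl -n kubernetes-dashboard create token admin-user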

Visit http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:443/proxy/

This works because of the apiserver proxy URL scheme: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
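The general proxy URL pattern from those docs:

# http://<kubectl-proxy-address>/api/v1/namespaces/<namespace>/services/[https:]<service>[:<port_name>]/proxy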

Prometheus check

kubectl port-forward pod/$(kubectl get pods --selector app=prometheus  --namespace=istio-system  -o jsonpath="{.items[0].metadata.name}") -n istio-system 9090 &

http://localhost:9090/graph
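A quick liveness check without the browser (Prometheus exposes a health endpoint):

curl -s http://localhost:9090/-/healthy
# expect a short 'Healthy' response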

Check a service like Kiali

kubectl port-forward svc/kiali 20001:20001 -n istio-system

Quarkus Java REST API container build

Quickstart from https://github.com/graalvm/mandrel/releases
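A sketch of the usual Quarkus-native container flow with Mandrel (project layout and Dockerfile.native are the Quarkus-generated defaults; the builder image tag is an assumption, pick a current one from the Mandrel releases):

./mvnw package -Pnative -Dquarkus.native.container-build=true \
  -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel:22.0-java17
docker build -f src/main/docker/Dockerfile.native -t rest-api:native .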
