Kubernetes Notes

Busybox Debug

kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- sh
kubectl run test --image=alpine:latest --rm -it --restart=Never -- sh
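Typical checks once inside the debug shell (service and namespace names are placeholders):

nslookup kubernetes.default
wget -qO- --timeout=2 http://some-service.some-namespace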

Remove (almost) everything from a namespace

kubectl delete daemonsets,replicasets,statefulsets,services,deployments,pods,rc,pvc,cm,secrets,ing --all

Remove (almost) everything

kubectl get ns --no-headers | awk '{print $1}' | xargs -t -I {} kubectl -n {} delete daemonsets,replicasets,statefulsets,services,deployments,pods,rc,pvc,cm,secrets,ing --all

Basic Pod with Container commands and args

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: web
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["nginx"]
    args: ["-g", "daemon off;", "-q"]
    ports:
    - containerPort: 80

Basic Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Labels and selectors

apiVersion: v1
kind: Pod
metadata:
  name: my-production-label-pod
  labels:
    app: my-app
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: my-development-label-pod
  labels:
    app: my-app
    environment: development
spec:
  containers:
  - name: nginx
    image: nginx
kubectl get pods -l app=my-app

kubectl get pods -l environment=production

kubectl get pods -l environment=development

kubectl get pods -l environment!=production

kubectl get pods -l 'environment in (development,production)'

kubectl get pods -l app=my-app,environment=production
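To see which labels are set, and to add or change one on a running pod:

kubectl get pods --show-labels
kubectl label pod my-production-label-pod environment=staging --overwrite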

Helm/Tiller commands and examples

# get current values
helm get values prometheus-operator
# upgrade/install an existing deployment
helm upgrade -f ingress-controller/values.yml nginx-ingress stable/nginx-ingress
helm install nfs-client-provisioner --set nfs.server=10.1.2.3 --set nfs.path=/NFSshare01 --set tolerations[0].key=role --set tolerations[0].value=infra --namespace=nfs-provisioner --set replicaCount=3 --set nodeSelector.role=infra  stable/nfs-client-provisioner

Tiller GOD mode

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 

Get the LoadBalancer IP

export EXT_IP=$(kubectl -n ingress-nginx get svc ingress-nginx -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
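Quick sanity check with the captured address (assuming the controller answers plain HTTP):

echo $EXT_IP
curl -I http://$EXT_IP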

Letsencrypt

Installation

$ kubectl apply --validate=false \
    -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
## Add the Jetstack Helm repository
$ helm repo add jetstack https://charts.jetstack.io


## Install the cert-manager helm chart
$ helm install --name my-release --namespace cert-manager jetstack/cert-manager
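The Ingress below references an Issuer named letsencrypt-staging, which these notes never define. A minimal sketch for the cert-manager 0.12 API (email and namespace are placeholders):

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: web # Issuer is namespaced; create it where the Ingress lives
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key # secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx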

Basic Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webserver
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: "letsencrypt-staging" # "Issuer" is local to the namespace
    #nginx.ingress.kubernetes.io/whitelist-source-range: "145.100.1.0/24,145.100.19.0/24,83.128.247.24"
    #cert-manager.io/cluster-issuer: "letsencrypt-staging" # "ClusterIssuer" is cluster-wide
    #nginx.ingress.kubernetes.io/rewrite-target: "/" # rewrites e.g. rewrite.bar.com/something to rewrite.bar.com/ # more here: https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite
spec:
  tls:
  - hosts:
    - something.example.com
    secretName: tls-secret
  rules:
  - host: something.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webserver
          servicePort: 8080

TCP ingress (nginx configmap)

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9997": "default/consul-test:8500" # ConfigMap data keys must be strings, so quote the port
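The ConfigMap alone does nothing: the ingress-nginx controller must be started with --tcp-services-configmap=ingress-nginx/tcp-services, and the controller's Service must expose the port. A sketch of patching the Service, assuming the standard install exposes a Service named ingress-nginx:

kubectl -n ingress-nginx patch svc ingress-nginx --type=json -p '[{"op":"add","path":"/spec/ports/-","value":{"name":"consul-test","port":9997,"targetPort":9997,"protocol":"TCP"}}]'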

Scheduler Affinity based on node labels with weight

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pref
spec:
  replicas: 5
  selector:
    matchLabels:
      app: pref
  template:
    metadata:
      labels:
        app: pref
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference: # kubectl label node node.bla availability-zone=zone1 # higher weight: the scheduler strongly prefers these nodes
              matchExpressions:
              - key: availability-zone
                operator: In
                values:
                - zone1
          - weight: 20 # kubectl label node node.bla share-type=dedicated # lower weight: preferred less often
            preference:
              matchExpressions:
              - key: share-type
                operator: In
                values:
                - dedicated
      containers:
      - args:
        - sleep
        - "99999"
        image: busybox
        name: main
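The node labels referenced in the comments above can be applied like this (node names are placeholders):

kubectl label node node1.example.com availability-zone=zone1
kubectl label node node2.example.com share-type=dedicated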

Storage

Basic PVC and pod

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---

kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
       claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

Local Storage (storageclass)

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
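kubernetes.io/no-provisioner means no dynamic provisioning, so matching PVs must be created by hand. A minimal local PV sketch for this class (path and hostname are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-01
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1 # directory must already exist on the node
  nodeAffinity: # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1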

Set a default storage class

kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Taints and Tolerations

kubectl taint node node1 node-type=prod:NoSchedule

This Deployment has a Toleration that will allow it to run on node1

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prod
  template:
    metadata:
      labels:
        app: prod
    spec:
      containers:
      - args:
        - sleep
        - "3600"
        image: busybox
        name: main
      tolerations:
      - key: node-type
        operator: Equal
        value: prod
        effect: NoSchedule

kubectl taint node node2 node-type=dev:NoSchedule

This Deployment has a Toleration that will allow it to run on node2

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
      - args:
        - sleep
        - "3600"
        image: busybox
        name: main
      tolerations:
      - key: node-type
        operator: Equal
        value: dev
        effect: NoSchedule
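To remove the taints again, append a dash to the taint key:

kubectl taint node node1 node-type-
kubectl taint node node2 node-type-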

Rolling Upgrades and Rollback

#Create a deployment with a record (for rollbacks):
kubectl create -f test-deployment.yaml --record

#Check the status of the rollout:
kubectl rollout status deployments test

#View the ReplicaSets in your cluster:
kubectl get replicasets

#Scale up your deployment by adding more replicas:
kubectl scale deployment test --replicas=5

#Expose the deployment and provide it a service:
kubectl expose deployment test --port 80 --target-port 80 --type NodePort

#Set the minReadySeconds attribute to your deployment:
kubectl patch deployment test -p '{"spec": {"minReadySeconds": 10}}'

#Use kubectl apply to update a deployment:
kubectl apply -f test-deployment.yaml

#Use kubectl replace to replace an existing deployment:
kubectl replace -f test-deployment.yaml

#Run this curl loop while the update happens:
while true; do curl http://10.x.x.x; done

#Perform the rolling update:
kubectl set image deployments/test app=idontexist/test:v2 --v 6

#Describe a certain ReplicaSet:
kubectl describe replicasets test-[hash]

#Apply the rolling update to version 3 (buggy):
kubectl set image deployment test app=idontexist/test:v3

#Undo the rollout and roll back to the previous version:
kubectl rollout undo deployments test

#Look at the rollout history:
kubectl rollout history deployment test

#Roll back to a certain revision:
kubectl rollout undo deployment test --to-revision=2

#Pause the rollout in the middle of a rolling update (canary release):
kubectl rollout pause deployment test

#Resume the rollout after the rolling update looks good:
kubectl rollout resume deployment test

# Redeploy in place Deployment or Daemonset:
kubectl -n rook-ceph patch daemonset rook-ceph-agent -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": {  \"redeploy\": \"$(date +%s)\"}}}}}"

Port-Forward

# pod
kubectl -n grafana port-forward rsc-grafana-84fb5c8b99-grdf2 3000
# svc
kubectl port-forward svc/redis-master 7000:6379 # (local:remote)
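With a forward running, the remote port is reachable on localhost:

curl http://localhost:3000 # grafana UI
redis-cli -p 7000 ping # assuming redis-cli is installed locally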

Configmaps (create and use)

kubectl create configmap appconfig --from-literal=key1=value1 --from-literal=key2=value2
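ConfigMaps can also be built from files (paths are placeholders):

kubectl create configmap appconfig-file --from-file=config.cfg
kubectl create configmap appconfig-env --from-env-file=app.env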

Configmap as ENV

apiVersion: v1
kind: ConfigMap
metadata:
   name: my-config-map
data:
   myKey: myValue
   anotherKey: anotherValue

Configmap as Configuration File (typical use)

apiVersion: v1
kind: ConfigMap
metadata:
  name: your-config-map
data:
  config.cfg: |- # copy and paste your config file - check indentation
    value1=100000000
    value2=222
    some other raw text here

Configmap as file (mount volume)

apiVersion: v1
kind: Pod
metadata:
  name: my-configmap-volume-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    volumeMounts:
      - name: config-volume
        mountPath: /etc/settings # each key in the ConfigMap becomes a file, e.g. /etc/settings/config.cfg
  volumes:
    - name: config-volume
      configMap:
        name: your-config-map
---
# Pod consuming my-config-map (see "Configmap as ENV" above) as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: my-configmap-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', "echo $(MY_VAR) && sleep 3600"]
    env:
    - name: MY_VAR
      valueFrom:
        configMapKeyRef:
          name: my-config-map
          key: myKey

Secrets

apiVersion: v1
kind: Secret
metadata:
  name: appsecret
stringData:
  cert: value
  key: value
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: app-container
    image: busybox
    command: ['sh', '-c', "echo Hello, Kubernetes! && sleep 3600"]
    env:
    - name: MY_CERT
      valueFrom:
        secretKeyRef:
          name: appsecret
          key: cert
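The same Secret can be created imperatively, or from certificate files (paths are placeholders):

kubectl create secret generic appsecret --from-literal=cert=value --from-literal=key=value
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key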

Jobs

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4 # how many times it will fail before giving up
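Checking on the Job and reading its output:

kubectl get jobs
kubectl logs job/pi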

CronJob

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
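Watching the CronJob fire and inspecting a run:

kubectl get cronjob hello
kubectl get jobs --watch
kubectl logs job/<job-name> # name comes from the previous command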

Stateful Set (note: a StatefulSet needs a headless Service, i.e. clusterIP: None)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
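serviceName: "nginx" points at the headless Service from the note above, which isn't shown here. A minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None # headless: gives each pod a stable DNS name (web-0.nginx, web-1.nginx)
  selector:
    app: nginx
  ports:
  - port: 80
    name: web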

Using InitContainer to manage dependencies

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - name: wordpress
    port: 80
    targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done;']

Users and access

# ADMIN user
kubectl create serviceaccount adminuser01 -n kube-system

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --serviceaccount=kube-system:adminuser01

TOKEN=$(kubectl -n kube-system describe secrets "$(kubectl -n kube-system describe serviceaccount adminuser01 | grep -i Tokens | awk '{print $2}')" | grep token: | awk '{print $2}')

kubectl config set-credentials padawanadmin --token=$TOKEN

kubectl config set-context superadmin --cluster=do-lon1-k8s-1-13-6-do-0-lon1-xxxxxxx --user=padawanadmin

kubectl config use-context superadmin
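Verifying the new context really has cluster-admin rights:

kubectl auth can-i '*' '*' --all-namespaces
kubectl auth can-i --list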

#TODO:  LIMITED users

Roles

kubectl create ns web
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web
  name: service-reader
rules:
- apiGroups: [""]
  verbs: ["get", "list"]
  resources: ["services"]
kubectl create rolebinding test --role=service-reader --serviceaccount=web:default -n web
kubectl proxy
curl localhost:8001/api/v1/namespaces/web/services
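Impersonation is a quicker way to check what the bound service account may do:

kubectl auth can-i list services --as=system:serviceaccount:web:default -n web # yes
kubectl auth can-i delete services --as=system:serviceaccount:web:default -n web # no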

Network Policies

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Run a deployment to test the NetworkPolicy:

kubectl run nginx --image=nginx --replicas=2

# Create a service for the deployment:
kubectl expose deployment nginx --port=80

#Attempt to access the service by using a busybox interactive pod:
kubectl run busybox --rm -it --image=busybox /bin/sh
wget --spider --timeout=1 nginx

Pod to Pod communication (eg. webserver <-> database)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-netpolicy
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - port: 5432
# Per the YAML above, label a pod to bring it under the network policy;
# it will then accept traffic only from pods labeled app=web,
# and only on port 5432.
kubectl label pods [pod_name] app=db

Network Policy based on namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ns-netpolicy
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: web # this is a label you set in the namespace metadata
    ports:
    - port: 5432
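The tenant label must actually be present on the namespace for the selector to match:

kubectl label namespace <web-namespace> tenant=web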

IP block and Egress

#IP block NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ipblock-netpolicy
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - ipBlock: # allow connections from this IP block
        cidr: 192.168.1.0/24
#egress NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-netpol
spec:
  podSelector:
    matchLabels:
      app: web
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - port: 5432

Another example based on matchlabels

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
spec:
  podSelector:
    matchLabels:
      app: secure-app #<= this policy will be applied to pods having the label "app: secure-app"
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          allow-access: "true" #<= traffic from pods with the label allow-access: "true"
    ports:
    - protocol: TCP #<= on TCP 80
      port: 80
  egress: # same for egress, traffic is allowed to port 80 for pods with label allow-access: "true"
  - to:
    - podSelector:
        matchLabels:
          allow-access: "true"
    ports:
    - protocol: TCP
      port: 80

Docker images from private repositories

kubectl create secret docker-registry auth-credentials-docker-bla --docker-server=https://docker-repo-here.lol --docker-username=cooluser --docker-password='gpnpwnpwnpwnp' --docker-email=user@example.com
kubectl patch sa default -p '{"imagePullSecrets": [{"name": "auth-credentials-docker-bla"}]}'
apiVersion: v1
kind: Pod
metadata:
  name: podz
  labels:
    app: busybox
spec:
  containers:
    - name: busybox
      image: docker-repo-here.lol/privateimage:latest
      command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
      imagePullPolicy: Always
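Instead of patching the default service account, the pull secret can also be referenced per pod; a sketch (pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: podz2 # hypothetical
spec:
  imagePullSecrets: # referenced directly, no service account patch needed
  - name: auth-credentials-docker-bla
  containers:
    - name: busybox
      image: docker-repo-here.lol/privateimage:latest
      command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']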

Security Context

# Run an alpine container with default security:
kubectl run pod-with-defaults --image alpine --restart Never -- /bin/sleep 999999

# Check the ID on the container:
kubectl exec pod-with-defaults id

Container that runs as a user:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-user-context
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 405
# Create a pod that runs the container as user:
kubectl apply -f alpine-user-context.yaml

#View the IDs of the new pod created with container user permission:
kubectl exec alpine-user-context id

SecurityContext and access to local (node) files

apiVersion: v1
kind: Pod
metadata:
  name: my-securitycontext-pod
spec:
  securityContext:
    runAsUser: 2000
    fsGroup: 3000
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', "cat /message/message.txt && sleep 3600"]
    volumeMounts:
    - name: message-volume
      mountPath: /message
  volumes:
  - name: message-volume
    hostPath:
      path: /etc/message

Pod that runs the container as non-root

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nonroot
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsNonRoot: true

Create a pod that runs the container as non-root:

kubectl apply -f alpine-nonroot.yaml

#View more information about the pod error:
kubectl describe pod alpine-nonroot

Privileged container pod:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true

Create the privileged container pod:

kubectl apply -f privileged-pod.yaml

# View the devices on the default container:
kubectl exec -it pod-with-defaults ls /dev

#View the devices on the privileged pod container:
kubectl exec -it privileged-pod ls /dev

#Try to change the time on a default container pod:
kubectl exec -it pod-with-defaults -- date +%T -s "12:00:00"

Container that will allow you to change the time:

apiVersion: v1
kind: Pod
metadata:
  name: kernelchange-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      capabilities:
        add:
        - SYS_TIME

Create the pod that will allow you to change the container’s time:

kubectl apply -f kernelchange-pod.yaml

#Change the time on a container:
kubectl exec -it kernelchange-pod -- date +%T -s "12:00:00"

#View the date on the container:
kubectl exec -it kernelchange-pod -- date

Container that removes capabilities:

apiVersion: v1
kind: Pod
metadata:
  name: remove-capabilities
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      capabilities:
        drop:
        - CHOWN

Create a pod that’s container has capabilities removed:

kubectl apply -f remove-capabilities.yaml

#Try to change the ownership of a container with removed capability:
kubectl exec remove-capabilities chown guest /tmp

Pod container that can’t write to the local filesystem:

apiVersion: v1
kind: Pod
metadata:
  name: readonly-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: my-volume
      mountPath: /volume
      readOnly: false
  volumes:
  - name: my-volume
    emptyDir:

Create a pod that will not allow you to write to the local container filesystem:

kubectl apply -f readonly-pod.yaml

#Try to write to the container filesystem:
kubectl exec -it readonly-pod touch /new-file

#Create a file on the volume mounted to the container:
kubectl exec -it readonly-pod touch /volume/newfile

#View the file on the volume that’s mounted:
kubectl exec -it readonly-pod -- ls -la /volume/newfile

Pod that has different group permissions for different containers:

apiVersion: v1
kind: Pod
metadata:
  name: group-context
spec:
  securityContext:
    fsGroup: 555
    supplementalGroups: [666, 777]
  containers:
  - name: first
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 1111
    volumeMounts:
    - name: shared-volume
      mountPath: /volume
      readOnly: false
  - name: second
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 2222
    volumeMounts:
    - name: shared-volume
      mountPath: /volume
      readOnly: false
  volumes:
  - name: shared-volume
    emptyDir:

Create a pod with two containers and different group permissions:

kubectl apply -f group-context.yaml

# Open a shell to the first container on that pod:
kubectl exec -it group-context -c first sh

Kubernetes Applications and Development

Termination Message

apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - image: busybox
    name: main
    command:
    - sh
    - -c
    - 'echo "I''ve had enough" > /var/termination-reason ; exit 1'
    terminationMessagePath: /var/termination-reason
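After the container exits, the message can be read from the pod status:

kubectl get pod pod2 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.message}'
kubectl describe pod pod2 # shows the same text under Last State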

Miscellaneous

kubectl get nodes --show-labels | grep small | awk '{print $1}' > nodes.small
kubectl get pods --all-namespaces -o wide | grep -f nodes.small

Create a job from a CronJob

kubectl get CronJob -n logging
kubectl -n logging create job --from=cronjob/curator-elasticsearch-curator run-now

Nginx rewrite (redirect www to the apex domain)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: devops-training-fe
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/issuer: letsencrypt-prod
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = 'www.devops-training.io' ) {
        rewrite ^ https://devops-training.io$request_uri permanent;
      }
spec:
  tls:
  - hosts:
    - devops-training.io
    - www.devops-training.io
    secretName: devops-training
  rules:
  - host: devops-training.io
    http:
      paths:
      - path: /
        backend:
          serviceName: devops-training
          servicePort: 80
  - host: www.devops-training.io
    http:
      paths:
      - path: /
        backend:
          serviceName: devops-training
          servicePort: 80