
Dealing with Dedicated Nodepools

Let's say you have a set of disparate nodepools in a cluster:

kubectl get nodes
NAME                                 STATUS   ROLES   AGE   VERSION
aks-nodepool1-26541295-vmss000004    Ready    agent   28h   v1.26.3
aks-nodepool1-26541295-vmss000005    Ready    agent   28h   v1.26.3
aks-nodepool1-26541295-vmss000006    Ready    agent   28h   v1.26.3
aks-nodepool1-26541295-vmss000007    Ready    agent   28h   v1.26.3
aks-poisonpool-25406481-vmss000000   Ready    agent   25h   v1.26.3
aks-poisonpool-25406481-vmss000001   Ready    agent   25h   v1.26.3
aks-poisonpool-25406481-vmss000002   Ready    agent   25h   v1.26.3
aks-poisonpool-25406481-vmss000003   Ready    agent   25h   v1.26.3

You'll notice we've got two distinct nodepools here, nodepool1 and poisonpool.

Let's say we want to ensure that TAP is only installed on the nodepool1 nodes. We can do this by tainting the nodepool1 nodes, so that only pods carrying a matching toleration can schedule onto them:

kubectl taint nodes aks-nodepool1-26541295-vmss000004 reservedfor=tap:NoSchedule
kubectl taint nodes aks-nodepool1-26541295-vmss000005 reservedfor=tap:NoSchedule
kubectl taint nodes aks-nodepool1-26541295-vmss000006 reservedfor=tap:NoSchedule
kubectl taint nodes aks-nodepool1-26541295-vmss000007 reservedfor=tap:NoSchedule

We'll also have to taint the poisonpool nodes with a different value. A toleration only permits scheduling onto tainted nodes, it doesn't force it, so without this taint the TAP pods could still land on poisonpool:

kubectl taint nodes aks-poisonpool-25406481-vmss000000 reservedfor=notap:NoSchedule
kubectl taint nodes aks-poisonpool-25406481-vmss000001 reservedfor=notap:NoSchedule
kubectl taint nodes aks-poisonpool-25406481-vmss000002 reservedfor=notap:NoSchedule
kubectl taint nodes aks-poisonpool-25406481-vmss000003 reservedfor=notap:NoSchedule
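
To confirm the taints landed, list them per node:

kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINT-KEYS:.spec.taints[*].key,TAINT-VALUES:.spec.taints[*].value'

One caveat: taints applied directly with kubectl may not survive AKS scale or upgrade operations, since replacement VMSS instances come up with the nodepool's configured taints; setting them at the nodepool level (az aks nodepool update supports a --node-taints flag in recent CLI versions) is more durable.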

The following ytt overlay adds the kapp.k14s.io/exists annotation to every Namespace resource a package would create (it matches zero or more Namespaces, so packages that don't create one are unaffected). Why do this? We're about to pre-create all of the TAP namespaces ourselves with a default-tolerations annotation, and kapp.k14s.io/exists tells kapp to treat the resource as one it doesn't own: kapp waits for the Namespace to exist rather than creating it, and won't update or delete it. That keeps our pre-created, annotated namespaces intact. We package the overlay as a Secret in tap-install so we can reference it from package_overlays later:

apiVersion: v1
kind: Secret
metadata:
  name: ns-kapp-exists-overlay
  namespace: tap-install
stringData:
  annotate-namespace-with-exists.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind": "Namespace"}), expects="0+"
    ---
    metadata:
      #@overlay/match missing_ok=True
      annotations:
        #@overlay/match missing_ok=True
        kapp.k14s.io/exists: ""

Add that overlay Secret to the cluster:

kubectl apply -f ns-kapp-exists-overlay.yaml
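
With the overlay in place, each package's rendered Namespace resource ends up looking roughly like this (an illustrative sketch; cert-manager is just one example):

apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  annotations:
    kapp.k14s.io/exists: ""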

Let's create all of these namespaces ahead of time, annotating each one so that pods created in it receive a default toleration for our reservedfor=tap taint. (The scheduler.alpha.kubernetes.io/defaultTolerations annotation is read by the PodTolerationRestriction admission plugin, which needs to be enabled on the cluster for this to take effect.)

namespaces=(
  accelerator-system api-auto-registration api-portal appliveview-tokens-system
  app-live-view-connector app-live-view-conventions app-live-view appsso
  build-service cert-injection-webhook stacks-operator-system kpack
  cartographer-system cert-manager knative-serving crossplane-system
  developer-conventions triggermesh knative-eventing knative-sources
  vmware-sources flux-system learning-center-guided-ui learningcenter
  metadata-store tap-namespace-provisioning cosign-system scan-link-system
  service-bindings services-toolkit source-system spring-boot-convention
  tap-gui tap-telemetry vmware-system-telemetry tekton-pipelines-resolvers
  tekton-pipelines tap-install kapp-controller secretgen-controller
  tanzu-cluster-essentials tanzu-package-repo-global tap-workload dev
)

for ns in "${namespaces[@]}"; do
  kubectl create ns "$ns"
  kubectl annotate ns "$ns" 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Equal", "effect": "NoSchedule", "key": "reservedfor", "value": "tap"}]'
done
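
To spot-check that the annotation landed, dump the annotations on one of the namespaces created above (dev, for example); you should see the defaultTolerations entry:

kubectl get ns dev -o jsonpath='{.metadata.annotations}'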

Now we have to account for those pre-created namespaces in our TAP installation. We'll do that by referencing the overlay Secret from every package via package_overlays in our TAP values:

shared:
  ingress_domain: tap.az.anew.io
  image_registry:
    project_path: "registry.az.anew.io/tap-build"
    username: "registry-username-example"
    password: "hunter2"

profile: full

ceip_policy_disclosed: true

supply_chain: basic

scanning:
  metadataStore:
    url:

contour:
  envoy:
    service:
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "aks-subnet"
        service.beta.kubernetes.io/azure-load-balancer-internal-vnet: "aks-vnet"
        service.beta.kubernetes.io/azure-load-balancer-internal-dns-label-name: "tap-contour"
        service.beta.kubernetes.io/azure-load-balancer-internal-dns-zone-name: "tap.az.anew.io"
        
accelerator:
  ingress:
    include: true
    enable_tls: false

appliveview:
  ingressEnabled: true

appliveview_connector:
  backend:
    ingressEnabled: true
    sslDeactivated: false

package_overlays:
  - name: accelerator
    secrets:
      - name: ns-kapp-exists-overlay
  - name: api-auto-registration
    secrets:
      - name: ns-kapp-exists-overlay
  - name: api-portal
    secrets:
      - name: ns-kapp-exists-overlay
  - name: appliveview
    secrets:
      - name: ns-kapp-exists-overlay
  - name: appliveview-apiserver
    secrets:
      - name: ns-kapp-exists-overlay
  - name: appliveview-connector
    secrets:
      - name: ns-kapp-exists-overlay
  - name: appliveview-conventions
    secrets:
      - name: ns-kapp-exists-overlay
  - name: appsso
    secrets:
      - name: ns-kapp-exists-overlay
  - name: bitnami-services
    secrets:
      - name: ns-kapp-exists-overlay
  - name: buildservice
    secrets:
      - name: ns-kapp-exists-overlay
  - name: cartographer
    secrets:
      - name: ns-kapp-exists-overlay
  - name: cert-manager
    secrets:
      - name: ns-kapp-exists-overlay
  - name: cnrs
    secrets:
      - name: ns-kapp-exists-overlay
  - name: crossplane
    secrets:
      - name: ns-kapp-exists-overlay
  - name: developer-conventions
    secrets:
      - name: ns-kapp-exists-overlay
  - name: eventing
    secrets:
      - name: ns-kapp-exists-overlay
  - name: fluxcd-source-controller
    secrets:
      - name: ns-kapp-exists-overlay
  - name: grype
    secrets:
      - name: ns-kapp-exists-overlay
  - name: learningcenter
    secrets:
      - name: ns-kapp-exists-overlay
  - name: learningcenter-workshops
    secrets:
      - name: ns-kapp-exists-overlay
  - name: metadata-store
    secrets:
      - name: ns-kapp-exists-overlay
  - name: namespace-provisioner
    secrets:
      - name: ns-kapp-exists-overlay
  - name: ootb-delivery-basic
    secrets:
      - name: ns-kapp-exists-overlay
  - name: ootb-supply-chain-basic
    secrets:
      - name: ns-kapp-exists-overlay
  - name: ootb-templates
    secrets:
      - name: ns-kapp-exists-overlay
  - name: policy-controller
    secrets:
      - name: ns-kapp-exists-overlay
  - name: scanning
    secrets:
      - name: ns-kapp-exists-overlay
  - name: contour
    secrets:
      - name: ns-kapp-exists-overlay
  - name: service-bindings
    secrets:
      - name: ns-kapp-exists-overlay
  - name: services-toolkit
    secrets:
      - name: ns-kapp-exists-overlay
  - name: source-controller
    secrets:
      - name: ns-kapp-exists-overlay
  - name: spring-boot-conventions
    secrets:
      - name: ns-kapp-exists-overlay
  - name: tap-auth
    secrets:
      - name: ns-kapp-exists-overlay
  - name: tap-gui
    secrets:
      - name: ns-kapp-exists-overlay
  - name: tekton-pipelines
    secrets:
      - name: ns-kapp-exists-overlay

Note that we've referenced the ns-kapp-exists-overlay Secret from every package in the profile, so the overlay is applied consistently across the installation.
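
If you want to verify the wiring, TAP attaches each overlay Secret to the corresponding PackageInstall as an ext.packaging.carvel.dev/ytt-paths-from-secret-name.* annotation (as documented for recent TAP releases), so after the install below you can spot-check one package:

kubectl get packageinstall accelerator -n tap-install -o jsonpath='{.metadata.annotations}'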

Now we can install TAP on the cluster. Pods in the pre-created namespaces pick up the default toleration for the reservedfor=tap taint, so they can schedule onto the nodepool1 nodes, while the reservedfor=notap:NoSchedule taint keeps them off the poisonpool nodes:

tanzu package install tap -p tap.tanzu.vmware.com -v 1.5.0 --values-file tap-values.yaml -n tap-install
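
Once the install settles, a quick (if blunt) way to confirm that nothing landed on the poisoned nodes is to grep the wide pod listing for the pool's node-name prefix; it should come back empty:

kubectl get pods -A -o wide | grep poisonpool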

@x95castle1 commented:

We might want an example of using the overlay to add the defaultTolerations annotation via ytt, as I think that might be a more common pattern vs. what we're showing for handling a namespace that already exists?

@x95castle1 commented:

#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Namespace"}), expects="0+"
---
metadata:
  #@overlay/match missing_ok=True
  annotations:
    #@overlay/match missing_ok=True
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "effect": "NoSchedule", "key": "reservedfor", "value": "tap"}]'

@4n3w (author) commented Jul 27, 2023:

Good point, thanks