#Helm #Kubernetes #cheatsheet, happy helming!

Helm cheatsheet

Get started


Helm concepts

The official docs do a good job of explaining the basic concepts. Some important points are shown in the table below:

| Helm concept | Description | Important point |
| --- | --- | --- |
| Chart (unpackaged) | A folder with files that follow the Helm chart guidelines | Can be deployed directly to a cluster |
| Chart (packaged) | A tar.gz archive of the above | Can be deployed directly to a cluster |
| Chart name | Name of the package as defined in Chart.yaml | Part of package identification |
| Templates | A set of Kubernetes manifests that form an application | Go templates can be used |
| Values | Settings that can be parameterized in Kubernetes manifests | Used for templating of manifests |
| Chart version | The version of the package/chart | Part of package identification |
| App version | The version of the application contained in the chart | Independent from chart version |
| Release | A deployed package in a Kubernetes cluster | Multiple releases of the same chart can be active |
| Release name | An arbitrary name given to the release | Independent from name of chart |
| Release revision | A number that gets incremented each time an application is deployed/upgraded | Unrelated to chart version |
| Repository | A file structure (HTTP server) with packages and an index.yaml file | Helm charts can be deployed without being fetched from a repository first |
| Installing | Creating a brand-new release from a Helm chart (either unpackaged, packaged or from a repo) | Chart will be automatically packaged |
| Upgrading | Changing an existing release in a cluster | Can be upgraded to any version (even the same) |
| Rolling back | Going back to a previous revision of a release | Helm handles the rollback, no need to re-run the pipeline |
| Pushing | Storing a Helm package in a repository | |
| Fetching | Downloading a Helm package from a repository to the local filesystem | |

Helm repositories are optional

Using Helm repositories is a recommended practice, but completely optional. You can deploy a Helm chart to a Kubernetes cluster directly from the filesystem. The quick start guide actually shows this scenario.

Helm can install a chart either in the package (.tgz) or unpackaged form (tree of files) to a Kubernetes cluster right away.
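
As a minimal sketch (chart path and release name are placeholders; Helm 2 `-n` release-name syntax, as used elsewhere in this cheatsheet):

# install straight from an unpackaged chart directory
helm install ./my-chart -n my-release

# or from a packaged archive
helm package ./my-chart                       # produces my-chart-0.1.0.tgz
helm install ./my-chart-0.1.0.tgz -n my-release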

Chart versions and appVersions

Each Helm chart has the ability to define two separate versions:

  1. The version of the chart itself (version field in Chart.yaml).
  2. The version of the application contained in the chart (appVersion field in Chart.yaml).

These are unrelated and can be bumped up in any manner that you see fit. You can sync them together or have them increase independently. There is no right or wrong practice here as long as you stick to one. We will see some versioning strategies in the next section.
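
For illustration, the two fields live side by side in Chart.yaml (values below are hypothetical):

# Chart.yaml
name: my-app
version: 1.4.0        # chart version, part of package identification
appVersion: "2.7.1"   # version of the application inside the chart, independent of the above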

Charts and sub-charts

The most basic way to use Helm is by having a single chart that holds a single application. The single chart will contain all the resources needed by your application such as deployments, services, config-maps etc.

However, you can also create a chart that depends on other charts (a.k.a. an umbrella chart), which can be completely external, using the requirements.yaml file. Using this strategy is optional and can work well in several organizations. Again, there is no definitive answer on right and wrong here; it depends on your team process.

(Image: Possible Helm structures)

Structure

.
├── Chart.yaml              --> metadata info
├── README.md
├── requirements.yaml       --> define dependencies
├── values.yaml             --> variable list, interpolated into template files during deployment
├── templates
│   ├── NOTES.txt           --> displayed after "helm install"
│   ├── _helpers.tpl        --> template helpers / partials
│   ├── spark-master-deployment.yaml   --> configuration with template support
│   ├── spark-worker-deployment.yaml
│   └── spark-zeppelin-deployment.yaml
└── charts                  --> bundled sub-charts
    └── apache/
        └── Chart.yaml
  • Chart.yaml
  name: The name of the chart (required)
  version: A SemVer 2 version (required)
  description: A single-sentence description of this project (optional)
  keywords:
    - A list of keywords about this project (optional)
  home: The URL of this project's home page (optional)
  sources:
    - A list of URLs to source code for this project (optional)
  maintainers: # (optional)
    - name: The maintainer's name (required for each maintainer)
      email: The maintainer's email (optional for each maintainer)
  engine: gotpl # The name of the template engine (optional, defaults to gotpl)
  icon: A URL to an SVG or PNG image to be used as an icon (optional).
  appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  deprecated: Whether or not this chart is deprecated (optional, boolean)
  tillerVersion: The version of Tiller that this chart requires. This should be expressed as a SemVer range: ">2.0.0" (optional)
  • requirements.yaml

  • alias - adding an alias for a dependency chart puts that chart into dependencies using the alias as the name of the new dependency.
  • condition - holds one or more YAML paths (delimited by commas). If a path exists in the top parent's values and resolves to a boolean value, the chart is enabled or disabled based on that value. Only the first valid path found in the list is evaluated; if no paths exist, the condition has no effect.
  • tags - a YAML list of labels to associate with this chart. In the top parent's values, all charts with tags can be enabled or disabled by specifying the tag and a boolean value. Conditions (when set in values) always override tags.

  dependencies:
  - name: apache
    version: 1.2.3
    repository: http://example.com/charts
    alias: new-subchart-1
    condition: subchart1.enabled, global.subchart1.enabled
    tags:
      - front-end
      - subchart1

  - name: mysql
    version: 3.2.1
    repository: http://another.example.com/charts
    alias: new-subchart-2
    condition: subchart2.enabled,global.subchart2.enabled
    tags:
      - back-end
      - subchart1
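
As a sketch of how the condition and tags fields above are driven, the top parent's values.yaml could contain (illustrative values):

# parent chart values.yaml
subchart1:
  enabled: true      # matched by the "subchart1.enabled" condition path
tags:
  front-end: false   # disables every chart tagged "front-end", unless a condition overrides it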

General Usage

  helm list --all
  helm repo (list|add|update)
  helm search
  helm inspect <chart-name>
  helm install --set a=b -f config.yaml <chart-name> -n <release-name> # --set takes precedence, merged into -f
  helm status <deployment-name>
  helm delete <deployment-name>
  helm inspect values <chart-name>
  helm upgrade -f config.yaml <deployment-name> <chart-name>
  helm rollback <deployment-name> <version>

  helm create <chart-name>
  helm package --app-version <appVersion> <chart-name>
  helm lint <chart-name>

  helm dep up <chart-name> # update dependency
  helm get manifest <deployment-name> # prints out all of the Kubernetes resources that were uploaded to the server
  helm install --debug --dry-run <deployment-name> # it will return the rendered template to you so you can see the output
  • --set outer.inner=value is translated into this:
  outer:
    inner: value
  • --set servers[0].port=80,servers[0].host=example:
  servers:
    - port: 80
      host: example
  • --set name={a, b, c} translates to:
  name:
  - a
  - b
  - c
  • --set name=value1\,value2 (escaped comma):
  name: "value1,value2"
  • --set nodeSelector."kubernetes.io/role"=master
  nodeSelector:
    kubernetes.io/role: master
  • --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] --set livenessProbe.httpGet=null
livenessProbe:
-  httpGet:
-    path: /user/login
-    port: http
  initialDelaySeconds: 120
+  exec:
+    command:
+    - cat
+    - docroot/CHANGELOG.txt
  • --timeout
  • --wait
  • --no-hooks
  • --recreate-pods

Template

Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template

Release.Name: The release name.
Release.Time: The time of the release.
Release.Namespace: The namespace the chart was released to.
Release.Service: The service that conducted the release. Usually this is Tiller.
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.IsInstall: This is set to true if the current operation is an install.
Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade.
Chart: The contents of the Chart.yaml. Thus, the chart version is obtainable as "Chart.Version" and the maintainers are in "Chart.Maintainers".
Files: Files can be accessed using {{index .Files "file.name"}} or using the "{{.Files.Get name}}" or "{{.Files.GetString name}}" functions. You can also access the contents of the file as []byte using "{{.Files.GetBytes}}"
Capabilities: Kubernetes version ({{.Capabilities.KubeVersion}}), Tiller version ({{.Capabilities.TillerVersion}}), and the supported Kubernetes API versions ({{.Capabilities.APIVersions.Has "batch/v1"}})

{{.Files.Get "config.ini"}}
{{.Files.GetBytes}} useful for things like images

{{.Template.Name}}
{{.Template.BasePath}}
  • default value
{{default "minio" .Values.storage}}

//same
{{ .Values.storage | default "minio" }}
  • put a quote outside
heritage: {{.Release.Service | quote }}

# same result
heritage: {{ quote .Release.Service }}
  • global variable
global:
  app: MyWordPress

// can be accessed as "{{.Values.global.app}}"
  • Includes a template called mytpl.tpl, then lowercases the result, then wraps that in double quotes
value: {{include "mytpl.tpl" . | lower | quote}}
  • The required function declares that an entry for .Values.who is required, and prints an error message when that entry is missing
value: {{required "A valid .Values.who entry required!" .Values.who }}
  • The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
  • The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation

  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file. So by convention, helper templates and partials are placed in a _helpers.tpl file.

Hooks

Read more

  • Include these annotations inside the hook yaml file, e.g. templates/post-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install, post-upgrade
    "helm.sh/hook-weight": "-5"

Chart repository

Read more
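
In short, a chart repository is just an HTTP server that serves packaged charts plus an index.yaml. A minimal sketch (URL and paths are placeholders):

helm package ./my-chart                              # produces my-chart-0.1.0.tgz
helm repo index . --url https://example.com/charts   # (re)generate index.yaml for the current directory
helm serve                                           # Helm 2: serve the local repository, handy for testing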

Signing

Read more

Test

Read more
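
As a sketch, a chart test in Helm 2 is a pod template annotated as a test hook and run with "helm test <release-name>" (service name and image are placeholders):

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
  - name: wget
    image: busybox
    command: ['wget']
    args: ['{{ .Release.Name }}-svc:80']
  restartPolicy: Never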

Flow Control


If/Else

{{ if PIPELINE }}
  # Do something
{{ else if OTHER PIPELINE }}
  # Do something else
{{ else }}
  # Default case
{{ end }}

data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "lemonade" }}
  mug: true
  {{- end }} # notice the leading "-": it eliminates the newline before the directive

With

with lets you set the current scope (.) to a particular object

data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }} # instead of writing ".Values.favorite.drink"

Inside of the restricted scope, you will not be able to access the other objects from the parent scope

Range

# predefined variable
pizzaToppings:
  - mushrooms
  - cheese
  - peppers
  - onions

toppings: |-
    {{- range $i, $val := .Values.pizzaToppings }}
    - {{ . | title | quote }}  # upper first character, then quote
    {{- end }}

sizes: |-
    {{- range tuple "small" "medium" "large" }}
    - {{ . }}
    {{- end }} # make a quick list

Variables

It follows the form $name. Variables are assigned with a special assignment operator: :=

data:
  myvalue: "Hello World"
  {{- $relname := .Release.Name -}}
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $relname }}
  {{- end }}

# use variable in range
 toppings: |-
    {{- range $index, $topping := .Values.pizzaToppings }}
      {{ $index }}: {{ $topping }}
    {{- end }}

#toppings: |-
#      0: mushrooms
#      1: cheese
#      2: peppers
#      3: onions

{{- range $key,$value := .Values.favorite }}
  {{ $key }}: {{ $value }}
  {{- end }} # instead of specify the key, we can actually loop through the values.yaml file and print values

There is one variable that is always global - $ - this variable will always point to the root context

...
labels:
    # Many helm templates would use `.` below, but that will not work,
    # however `$` will work here
    app: {{ template "fullname" $ }}
    # I cannot reference .Chart.Name, but I can do $.Chart.Name
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
    release: "{{ $.Release.Name }}"
    heritage: "{{ $.Release.Service }}"
...

Named Templates

template names are global

# _helpers.tpl
{{/* Generate basic labels */}}
{{- define "my_labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
    version: {{ .Chart.Version }}
    name: {{ .Chart.Name }}
{{- end }}

When a named template (created with define) is rendered, it will receive the scope passed in by the template call.

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template "my_labels" . }} # Notice the final dot, it will pass the global scope inside template file. Without it version & name will not be �generated.
  {{- include "my_labels" . | indent 2 }} # similar to "template" directive, have the ability to control indentation

referable to use include over template. Because template is an action, and not a function, there is no way to pass the output of a template call to other functions; the data is simply inserted inline.

Files inside Templates

# file located at parent folder
# config1.toml: |-
#   message = config 1 here
# config2.toml: |-
#   message = config 2 here
# config3.toml: |-
#   message = config 3 here

data:
  {{- $file := .Files }} # set variable
  {{- range tuple "config1.toml" "config2.toml" "config3.toml" }} # create list
  {{ . }}: |- # config file name
    {{ $file.Get . }} # get file's content
  {{- end }}

Glob-patterns & encoding

apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
+{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
---
apiVersion: v1
kind: Secret
metadata:
  name: very-secret
type: Opaque
data:
+{{ (.Files.Glob "bar/*").AsSecrets | indent 2 }}

token: |-
  {{ .Files.Get "config1.toml" | b64enc }}

YAML reference

# Force type
age: !!str 21
port: !!int "80"

# Fake first line to preserve integrity
coffee: | # no strip
  # Commented first line
         Latte
  Cappuccino
  Espresso

coffee: |- # strip off trailing newline
  Latte
  Cappuccino
  Espresso

coffee: |+ # preserve trailing newline
  Latte
  Cappuccino
  Espresso


another: value

myfile: | # insert static file
{{ .Files.Get "myfile.txt" | indent 2 }}

coffee: > # treat as one long line
  Latte
  Cappuccino
  Espresso

Prerequisite: enable experimental OCI support

export HELM_EXPERIMENTAL_OCI=1

Export chart

  1. From Helm 3 repository

Add the chart repo and update the repo index

helm repo add repo repo_url
helm repo update

Download chart tgz

helm fetch repo/chart --version <2.0.0>

Download and export chart

helm pull repo/chart --version <version> --untar --destination <DIR>
  2. From Helm 3 OCI repository
helm chart pull repo/chart:version
helm chart export repo/chart:version --destination <DIR>

Get values from a release

helm get values <RELEASE_NAME>

Kubernetes cheatsheet

Getting Started

  • Fault tolerance
  • Rollback
  • Auto-healing
  • Auto-scaling
  • Load-balancing
  • Isolation (sandbox)

Sample yaml

apiVersion: <>
kind: <>
metadata:
  name: <>
  labels:
    ...
  annotations:
    ...
spec:
  containers:
    ...
  initContainers:
    ...
  priorityClassName: <>

Workflow

  • (kube-scheduler, controller-manager, etcd) --443--> API Server

  • API Server --10250--> kubelet

    • non-verified certificate
    • MITM
    • Solution:
      • set kubelet-certificate-authority (see the flag sketch after this list)
      • ssh tunneling
  • API server --> (nodes, pods, services)

    • Plain HTTP (unsafe)
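
For the kubelet-certificate-authority mitigation listed above, a minimal sketch of the relevant kube-apiserver flags (certificate paths are assumptions):

kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key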

Physical components

Master

  • API Server (443)
  • kube-scheduler
  • controller-manager
    • cloud-controller-manager
    • kube-controller-manager
  • etcd

Other components talk to API server, no direct communication

Node

  • Kubelet

  • Container Engine

    • CRI
      • The interface the kubelet uses to talk to the container engine
  • Kube-proxy

Everything is an object - persistent entities

  • maintained in etcd, identified using

    • names: client-given
    • UIDs: system-generated
  • Both need to be unique

  • three management methods

    • Imperative commands (kubectl)
    • Imperative object configuration (kubectl + yaml)
      • repeatable
      • observable
      • auditable
    • Declarative object configuration (yaml + config files)
      • Live object configuration
      • Current object configuration file
      • Last-applied object configuration file

Node Capacity
---------------------------
|     kube-reserved       |
|-------------------------|
|     system-reserved     |
|-------------------------|
|    eviction-threshold   |
|-------------------------|
|                         |
|      allocatable        |
|   (available for pods)  |
|                         |
|                         |
---------------------------

Namespaces

  • Three pre-defined

    • default
    • kube-system
    • kube-public: auto-readable by all users
  • Objects without namespaces

    • Nodes
    • PersistentVolumes
    • Namespaces

Labels

  • key / value
  • loose coupling via selectors
  • need not be unique
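
For example, labels and selectors from the command line (label keys/values are placeholders):

kubectl label pod my-pod env=dev              # add or update a label
kubectl get pods -l env=dev                   # equality-based selector
kubectl get pods -l 'env in (dev,staging)'    # set-based selector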

ClusterIP

  • Independent of lifespan of any backend pod
  • Service object has a static port assigned to it

Controller manager

  • ReplicaSet, deployment, daemonset, statefulSet
  • Actual state <-> desired state
  • reconciliation loop

Kube-scheduler

  • nodeSelector
  • Affinity & Anti-Affinity
    • Node
      • Steer pod to node
    • Pod
      • Steer pod towards or away from pods
  • Taints & tolerations (anti-affinity between node and pod!)
    • Based on predefined configuration (env=dev:NoSchedule)
      ...
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "dev"
        effect: "NoSchedule"
      ...
    • Based on node condition (alpha in v1.8)
      • taints added by node controller
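
The node side of the env=dev:NoSchedule example above is applied with kubectl taint (node name is a placeholder):

kubectl taint nodes node1 env=dev:NoSchedule   # only pods tolerating env=dev may be scheduled on node1
kubectl taint nodes node1 env:NoSchedule-      # remove the taint again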

Pod

kubectl run name --image=<image>

What's available inside the container?

  • File system
    • Image
    • Associated Volumes
      • ordinary
      • persistent
    • Container
      • Hostname
    • Pod
      • Pod name
      • User-defined envs
    • Services
      • List of all services

Access with:

  • Symlink (important):

    • /etc/podinfo/labels
    • /etc/podinfo/annotations
  • Or:

volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels
        - path: "annotations"
          fieldRef:
            fieldPath: metadata.annotations

Status

  • Pending
  • Running
  • Succeeded
  • Failed
  • Unknown

Probe

  • Liveness
    • Failed? Restart policy applied
  • Readiness
    • Failed? Removed from service
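
A minimal sketch of both probe types inside a container spec (paths, ports, and timings are placeholders):

containers:
- name: app
  image: my-app:1.0
  livenessProbe:              # failure --> container restarted according to restartPolicy
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
  readinessProbe:             # failure --> pod removed from the service endpoints
    exec:
      command: ["cat", "/tmp/ready"]
    periodSeconds: 5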

Pod priorities

  • available since 1.8
  • PriorityClass object
  • Affect scheduling order
    • High priority pods could jump the queue
  • Preemption
    • Low-priority pods could be preempted to make way for a higher-priority one (if no node is available for the high-priority pod)
    • These preempted pods would have a graceful termination period
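
A sketch of a PriorityClass and a pod referencing it (names and value are illustrative):

apiVersion: scheduling.k8s.io/v1beta1   # scheduling.k8s.io/v1 on newer clusters
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For important workloads only"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: my-app:1.0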

Multi-Container Pods

  • Share access to memory space
  • Connect to each other using localhost
  • Share access to the same volume
  • the entire pod is hosted on the same node
  • all or nothing
  • no auto healing or scaling

Init containers

  • run before app containers
  • always run to completion
  • run serially
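
A sketch of an init container that blocks the app container until a dependency resolves (image and service name are placeholders):

spec:
  initContainers:                # run serially, each to completion, before app containers start
  - name: wait-for-db
    image: busybox:1.31
    command: ['sh', '-c', 'until nslookup my-db; do echo waiting; sleep 2; done']
  containers:
  - name: app
    image: my-app:1.0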

Lifecycle hooks

  • PostStart
  • PreStop (blocking)

Handlers:

  • Exec
  • HTTP
...
spec:
  containers:
  - name: <>
    lifecycle:
      postStart:
        exec:
          command: [<>]
      preStop:
        httpGet:
          path: <>
...

Hooks may be invoked multiple times

Quality of Service (QoS)

When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:

  • Guaranteed (all containers have limits == requests)

If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own cpu limit, but does not specify a cpu request, Kubernetes automatically assigns a cpu request that matches the limit.

  • Burstable (at least 1 has limits or requests)
  • BestEffort (no limits or requests)
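
For example, a container whose limits equal its requests, which is the condition for the Guaranteed class (values are placeholders):

containers:
- name: app
  image: my-app:1.0
  resources:
    requests:
      cpu: "250m"
      memory: "128Mi"
    limits:
      cpu: "250m"       # limits == requests on every container --> Guaranteed
      memory: "128Mi"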

PodPreset

You can use a podpreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time. This task shows some examples on using the PodPreset resource

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}

ReplicaSet

Features:

  • Scaling and healing
  • Pod template
  • number of replicas

Components:

  • Pod template

  • Pod selector (could use matchExpressions)

  • Label of replicaSet

  • Number of replicas

  • Could delete a replicaSet without its pods using --cascade=false

  • Isolating pods from replicaSet by changing its labels

Deployments

  • versioning and rollback

  • Contains spec of replicaSet within it

  • advanced deployment

  • blue-green

  • canary

  • Update containers --> new replicaSet & new pods created --> old RS still exists --> reduced to zero

  • Every change is tracked

  • Append --record in kubectl to keep history

  • Update strategy

    • Recreate
      • Old pods would be killed before new pods come up
    • RollingUpdate
      • progressDeadlineSeconds
      • minReadySeconds
      • rollbackTo
      • revisionHistoryLimit
      • paused
        • spec.Paused
  • kubectl rollout undo deployment/<> --to-revision=<>

  • kubectl rollout status deployment/<>

  • kubectl set image deployment/<> <>=<>:<>

  • kubectl rollout resume/pause <>

ReplicationController

  • RC = ( RS + deployment ) before
  • Obsolete

DaemonSet

  • Ensure all nodes run a copy of pod
  • Cluster storage, log collection, node monitor ...

StatefulSet

  • Maintains a sticky identity
  • Not interchangeable
  • Identity is maintained across any rescheduling

Limitation

  • volumes must be pre-provisioned
  • Deleting / Scaling will not delete associated volumes

Flow

  • Deployed 0 --> (n-1)
  • Deleted (n-1) --> 0 (successor must be completely shut down before proceeding)
  • Must be all ready and running before scaling happens

Job (batch/v1)

  • Non-parallel jobs
  • Parallel jobs
    • Fixed completion count
      • job completes when number of completions reaches target
    • With work queue
      • requires coordination
  • Use spec.activeDeadlineSeconds to prevent infinite loop
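
A sketch of a fixed-completion-count Job guarded by activeDeadlineSeconds (numbers are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 5                 # job completes when 5 pods succeed
  parallelism: 2
  activeDeadlineSeconds: 600     # prevents the job from running forever
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl:5.30
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]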

Cronjob

  • Job should be idempotent

Horizontal pod autoscaler

  • Targets: replicationControllers, deployments, replicaSets
  • CPU or custom metrics
  • Won't work with non-scaling objects: daemonSets
  • Prevent thrashing (upscale/downscale-delay)
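
For example, the imperative form against a deployment (target and thresholds are placeholders):

kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80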

Services

  • Logical set of backend pods + frontend

  • Frontend: static IP + port + dns name

  • Backend: set of backend pods (via selector)

  • Static IP and networking.

  • Kube-proxy routes traffic to the VIP.

  • Automatically creates endpoints based on the selector.

  • ClusterIP

  • NodePort

    • external --> NodeIP + NodePort --> kube-proxy --> ClusterIP
  • LoadBalancer

    • Need to have cloud-controller-manager
      • Node controller
      • Route controller
      • Service controller
      • Volume controller
    • external --> LB --> NodeIP + NodePort --> kube-proxy --> ClusterIP
  • ExternalName

    • Can only resolve with kube-dns
    • No selector
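
A NodePort Service sketch tying the pieces above together (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort          # external --> NodeIP:NodePort --> kube-proxy --> ClusterIP
  selector:
    app: my-app           # backend pods are picked via this selector
  ports:
  - port: 80              # ClusterIP port
    targetPort: 8080      # container port on the backend pods
    nodePort: 30080       # optional; auto-assigned from 30000-32767 if omitted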

Service discovery

  • SRV record for named port
    • _port-name._port-protocol.service-name.namespace.svc.cluster.local
  • Pod domain
    • pod-ip-address.namespace.pod.cluster.local (the pod IP with dots replaced by dashes)
    • hostname is metadata.name

spec.dnsPolicy

  • Default
    • inherit node's name resolution
  • ClusterFirst
    • Any DNS query that does not match the configured cluster domain suffix, such as “www.kubernetes.io”, is forwarded to the upstream nameserver inherited from the node
  • ClusterFirstWithHostNet
    • if host network = true
  • None (since k8s 1.9)
    • Allow custom dns server usage

Headless service

  • with selector? --> associate with pods in cluster
  • without selector? --> forward to externalName
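
A headless Service is declared by setting clusterIP: None; a minimal sketch with a selector (names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  clusterIP: None         # headless: DNS returns the backing pod IPs directly
  selector:
    app: my-app
  ports:
  - port: 5432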

You can also specify externalIPs on a Service.

Volumes

Lifetime longer than any containers inside a pod.

Types:

  • configMap

  • emptyDir

    • share space / state across containers in same pod
    • containers can mount at different times
    • pod crash --> data lost
    • container crash --> ok
  • gitRepo

  • secret

    • stored in RAM (tmpfs)
  • hostPath
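
For instance, an emptyDir shared between two containers of the same pod (names and paths are placeholders):

spec:
  containers:
  - name: writer
    image: busybox:1.31
    command: ['sh', '-c', 'while true; do date >> /scratch/log; sleep 5; done']
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  - name: reader
    image: busybox:1.31
    command: ['sh', '-c', 'tail -f /scratch/log']
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}          # survives container crashes; lost when the pod leaves the node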

Persistent volumes

Role-Based Access Control (RBAC)

  • Role
    • Apply on namespace resources
  • ClusterRole
    • cluster-scoped resources (nodes,...)
    • non-resources endpoint (/healthz)
    • namespace resources across all namespaces
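
A sketch of a namespaced Role plus RoleBinding (subject name is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io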

Custom Resource Definitions

CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
    # categories is a list of grouped resources the custom resource belongs to.
    categories:
    - all
  validation:
   # openAPIV3Schema is the schema for validating custom objects.
    openAPIV3Schema:
      properties:
        spec:
          properties:
            cronSpec:
              type: string
              pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
            replicas:
              type: integer
              minimum: 1
              maximum: 10
  # subresources describes the subresources for custom resources.
  subresources:
    # status enables the status subresource.
    status: {}
    # scale enables the scale subresource.
    scale:
      # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.
      specReplicasPath: .spec.replicas
      # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.
      statusReplicasPath: .status.replicas
      # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.
      labelSelectorPath: .status.labelSelector
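
Once the CRD above is registered, objects of the new kind can be created like any other resource, for example:

apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  replicas: 3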

Notes

Basic commands

# show current context
kubectl config current-context

# get specific resource
kubectl get (pod|svc|deployment|ingress) <resource-name>

# Get pod logs
kubectl logs -f <pod-name>

# Get nodes list
kubectl get no -o custom-columns=NAME:.metadata.name,AWS-INSTANCE:.spec.externalID,AGE:.metadata.creationTimestamp

# Run specific command | Drop to shell
kubectl exec -it <pod-name> <command>

# Describe specific resource
kubectl describe (pod|svc|deployment|ingress) <resource-name>

# Set context
kubectl config set-context $(kubectl config current-context) --namespace=<namespace-name>

# Run a test pod
kubectl run -it --rm --generator=run-pod/v1 --image=alpine:3.6 tuan-shell -- sh
  • from @so0k link

  • access dashboard

# bash
kubectl -n kube-system port-forward $(kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

# fish
kubectl -n kube-system port-forward (kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

jsonpath

From link

{
  "kind": "List",
  "items":[
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.1"},
      "status":{
        "capacity":{"cpu":"4"},
        "addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
      }
    },
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.2"},
      "status":{
        "capacity":{"cpu":"8"},
        "addresses":[
          {"type": "LegacyHostIP", "address":"127.0.0.2"},
          {"type": "another", "address":"127.0.0.3"}
        ]
      }
    }
  ],
  "users":[
    {
      "name": "myself",
      "user": {}
    },
    {
      "name": "e2e",
      "user": {"username": "admin", "password": "secret"}
    }
  ]
}
| Function | Description | Example | Result |
| --- | --- | --- | --- |
| text | the plain text | kind is {.kind} | kind is List |
| @ | the current object | {@} | the same as input |
| . or [] | child operator | {.kind} or {['kind']} | List |
| .. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e |
| * | wildcard. Get all objects | {.items[*].metadata.name} | [127.0.0.1 127.0.0.2] |
| [start:end:step] | subscript operator | {.users[0].name} | myself |
| [,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8] |
| ?() | filter | {.users[?(@.name=="e2e")].user.password} | secret |
| range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| '' | quote interpreted string | {range .items[*]}{.metadata.name}{'\t'}{end} | 127.0.0.1 127.0.0.2 |

Below are some examples using jsonpath:

$ kubectl get pods -o json
$ kubectl get pods -o=jsonpath='{@}'
$ kubectl get pods -o=jsonpath='{.items[0]}'
$ kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
$ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'

Resource limit

CPU

The CPU resource is measured in cpu units. One cpu, in Kubernetes, is equivalent to:

  • 1 AWS vCPU
  • 1 GCP Core
  • 1 Azure vCore
  • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

Memory

The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent approximately the same value:

128974848, 129e6, 129M , 123Mi

Chapter 13. Integrating storage solutions and Kubernetes

  • External service without selector (accessed via the external-database.default.svc.cluster.local DNS name)
kind: Service
apiVersion: v1
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: "database.company.com"
  • external service with IP only
kind: Service
apiVersion: v1
metadata:
  name: external-ip-database
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-ip-database
subsets:
  - addresses:
    - ip: 192.168.0.1
    ports:
    - port: 3306

Downward API

The following information is available to containers through environment variables and downwardAPI volumes:

Information available via fieldRef:

  • spec.nodeName - the node’s name
  • status.hostIP - the node’s IP
  • metadata.name - the pod’s name
  • metadata.namespace - the pod’s namespace
  • status.podIP - the pod’s IP address
  • spec.serviceAccountName - the pod’s service account name
  • metadata.uid - the pod’s UID
  • metadata.labels[''] - the value of the pod’s label (for example, metadata.labels['mylabel']); available in Kubernetes 1.9+
  • metadata.annotations[''] - the value of the pod’s annotation (for example, metadata.annotations['myannotation']); available in Kubernetes 1.9+

Information available via resourceFieldRef:
  • A Container’s CPU limit
  • A Container’s CPU request
  • A Container’s memory limit
  • A Container’s memory request

In addition, the following information is available through downwardAPI volume fieldRef:

  • metadata.labels - all of the pod’s labels, formatted as label-key="escaped-label-value" with one label per line
  • metadata.annotations - all of the pod’s annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line
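
A sketch exposing some of these fields as environment variables (container name and image are placeholders):

containers:
- name: app
  image: my-app:1.0
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: app
        resource: limits.cpu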

Labs

Guaranteed Scheduling For Critical Add-On Pods

See link

  • Marking pod as critical when using Rescheduler. To be considered critical, the pod has to:
    • Run in the kube-system namespace (configurable via flag)
    • Have the scheduler.alpha.kubernetes.io/critical-pod annotation set to empty string
    • Have the PodSpec’s tolerations field set to [{"key":"CriticalAddonsOnly", "operator":"Exists"}].

The first one marks a pod as critical. The second one is required by the Rescheduler algorithm.

  • Marking pod as critical when priorities are enabled. To be considered critical, the pod has to:
    • Run in the kube-system namespace (configurable via flag)
    • Have the priorityClass set as system-cluster-critical or system-node-critical, the latter being the highest priority in the entire cluster
    • Have the scheduler.alpha.kubernetes.io/critical-pod annotation set to empty string (this will be deprecated too)
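
Putting the requirements above together, the pod would carry roughly these fields (a sketch, not a complete manifest):

metadata:
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  priorityClassName: system-cluster-critical   # when priorities are enabled
  tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"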

Set command or arguments via env

env:
- name: MESSAGE
  value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]