kubectl get pods -l app=my-app
kubectl get pods -l environment=production
kubectl get pods -l environment=development
kubectl get pods -l environment!=production
kubectl get pods -l 'environment in (development,production)'
kubectl get pods -l app=my-app,environment=production
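The selectors above match labels set in pod metadata; a minimal pod manifest carrying both labels might look like this (pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod        # hypothetical name
  labels:
    app: my-app           # matched by -l app=my-app
    environment: production
spec:
  containers:
  - name: main
    image: nginx
```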
Helm/Tiller commands and examples
# get current values
helm get values prometheus-operator
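A common round-trip with `helm get values` is to dump the release's user-supplied values, edit them, and upgrade in place. A sketch, assuming the release was installed from the `stable/prometheus-operator` chart (the chart name is an assumption):

```shell
# Dump the user-supplied values of the release to a file
helm get values prometheus-operator > values.yaml
# After editing values.yaml, apply them back to the same release
helm upgrade prometheus-operator stable/prometheus-operator -f values.yaml
```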
# Create a deployment with a record (for rollbacks):
kubectl create -f test-deployment.yaml --record
# Check the status of the rollout:
kubectl rollout status deployments test
# View the ReplicaSets in your cluster:
kubectl get replicasets
# Scale up your deployment by adding more replicas:
kubectl scale deployment test --replicas=5
# Expose the deployment and provide it a service:
kubectl expose deployment test --port 80 --target-port 80 --type NodePort
# Set the minReadySeconds attribute on your deployment:
kubectl patch deployment test -p '{"spec": {"minReadySeconds": 10}}'
# Use kubectl apply to update a deployment:
kubectl apply -f test-deployment.yaml
# Use kubectl replace to replace an existing deployment:
kubectl replace -f test-deployment.yaml
# Run this curl loop while the update happens:
while true; do curl http://10.x.x.x; done
# Perform the rolling update:
kubectl set image deployments/test app=idontexist/test:v2 --v 6
# Describe a certain ReplicaSet:
kubectl describe replicasets test-[hash]
# Apply the rolling update to version 3 (buggy):
kubectl set image deployment test app=idontexist/test:v3
# Undo the rollout and roll back to the previous version:
kubectl rollout undo deployments test
# Look at the rollout history:
kubectl rollout history deployment test
# Roll back to a certain revision:
kubectl rollout undo deployment test --to-revision=2
# Pause the rollout in the middle of a rolling update (canary release):
kubectl rollout pause deployment test
# Resume the rollout after the rolling update looks good:
kubectl rollout resume deployment test
# Redeploy a Deployment or DaemonSet in place:
kubectl -n rook-ceph patch daemonset rook-ceph-agent -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"
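The walkthrough above references test-deployment.yaml without showing it; a minimal sketch consistent with the commands (the image tag, labels, and replica count are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  minReadySeconds: 10          # set here, or via kubectl patch as above
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: app              # the container name used by "kubectl set image ... app=..."
        image: idontexist/test:v1
        ports:
        - containerPort: 80
```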
Port-Forward
# pod
kubectl -n grafana port-forward rsc-grafana-84fb5c8b99-grdf2 3000
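`port-forward` can also target a Service or Deployment, which avoids hard-coding the pod hash; a sketch, assuming the Service/Deployment is named `rsc-grafana` and the service port is 80:

```shell
# Forward local port 3000 to the service's port 80 (service name assumed)
kubectl -n grafana port-forward svc/rsc-grafana 3000:80
# Or target the deployment; kubectl picks one of its pods for you
kubectl -n grafana port-forward deployment/rsc-grafana 3000
```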
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-config-map
data:
  config.cfg: |-
    # copy and paste your config file - check indentation
    value1=100000000
    value2=222
    some other raw text here
Configmap as file (mount volume)
apiVersion: v1
kind: Pod
metadata:
  name: my-configmap-volume-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    volumeMounts:
    - name: config-volume
      mountPath: /etc/settings  # this will mount /etc/settings/config.cfg inside the pod
  volumes:
  - name: config-volume
    configMap:
      name: your-config-map
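Instead of writing the ConfigMap manifest by hand, the same object can be created imperatively from a local file; a sketch, assuming config.cfg exists in the current directory:

```shell
# Build the ConfigMap from a local file; the key defaults to the filename
kubectl create configmap your-config-map --from-file=config.cfg
# Verify the result
kubectl get configmap your-config-map -o yaml
```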
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4  # how many times it will fail before giving up
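To run and inspect the Job above, assuming the manifest is saved as pi-job.yaml (the filename is an assumption):

```shell
kubectl apply -f pi-job.yaml
# Follow the job pod's logs; the container prints the computed digits of pi
kubectl logs -f job/pi
# Check completion status
kubectl get jobs pi
```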
# Create a deployment with two replicas (on current kubectl, "run" no longer creates deployments):
kubectl create deployment nginx --image=nginx --replicas=2
# Create a service for the deployment:
kubectl expose deployment nginx --port=80
#Attempt to access the service by using a busybox interactive pod:
kubectl run busybox --rm -it --image=busybox -- /bin/sh
wget --spider --timeout=1 nginx
Pod to Pod communication (e.g. webserver <-> database)
# As per the yaml above, label a pod to get the network policy,
# and it will only accept traffic from pods labeled "web"
# for traffic coming on port 5432
kubectl label pods [pod_name] app=db
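The pod-to-pod NetworkPolicy referenced above is not included here; a sketch that matches the labels and port mentioned (the policy name is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-netpolicy        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db               # applies to pods labeled app=db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web          # only traffic from pods labeled app=web
    ports:
    - port: 5432
```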
Network Policy based on namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ns-netpolicy
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: web  # this is a label you set in the namespace metadata
    ports:
    - port: 5432
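The namespaceSelector above matches labels on the Namespace object itself, so the label has to be applied to the namespace:

```shell
# Label the namespace so the namespaceSelector can match it
kubectl label namespace [namespace_name] tenant=web
# Verify the namespace labels
kubectl get namespaces --show-labels
```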
IP block and Egress
# IP block NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ipblock-netpolicy
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - ipBlock:  # allow connections from this IP block
        cidr: 192.168.1.0/24
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
spec:
  podSelector:
    matchLabels:
      app: secure-app  # <= this policy will be applied to pods having the label "app: secure-app"
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          allow-access: "true"  # <= traffic from pods with the label allow-access: "true"
    ports:
    - protocol: TCP  # <= on TCP 80
      port: 80
  egress:  # same for egress: traffic is allowed to port 80 for pods with label allow-access: "true"
  - to:
    - podSelector:
        matchLabels:
          allow-access: "true"
    ports:
    - protocol: TCP
      port: 80
# Run an alpine container with default security:
kubectl run pod-with-defaults --image alpine --restart Never -- /bin/sleep 999999
# Check the IDs on the container:
kubectl exec pod-with-defaults -- id
# Create a pod that runs the container as a specific user:
kubectl apply -f alpine-user-context.yaml
# View the IDs of the new pod created with container user permission:
kubectl exec alpine-user-context -- id
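alpine-user-context.yaml is referenced but not shown; a minimal sketch, where the uid value 405 (alpine's guest user) is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-user-context
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 405   # hypothetical uid; "id" in the container will report it
```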
# Create a privileged pod:
kubectl apply -f privileged-pod.yaml
# View the devices on the default container:
kubectl exec -it pod-with-defaults -- ls /dev
# View the devices on the privileged pod container:
kubectl exec -it privileged-pod -- ls /dev
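privileged-pod.yaml is referenced but not shown; a minimal sketch (pod name and image assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true   # container sees the node's devices under /dev
```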
# Try to change the time on a default container pod (this should fail):
kubectl exec -it pod-with-defaults -- date +%T -s "12:00:00"
# Create the pod that will allow you to change the container’s time:
kubectl apply -f kernelchange-pod.yaml
# Change the time on a container:
kubectl exec -it kernelchange-pod -- date +%T -s "12:00:00"
# View the date on the container:
kubectl exec -it kernelchange-pod -- date
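kernelchange-pod.yaml is referenced but not shown; a sketch that adds only the capability needed to set the clock (names assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kernelchange-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      capabilities:
        add: ["SYS_TIME"]   # CAP_SYS_TIME lets the container set the system clock
```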
# Create a pod whose container has capabilities removed:
kubectl apply -f remove-capabilities.yaml
# Try to change file ownership in the container with the capability removed (this should fail):
kubectl exec remove-capabilities -- chown guest /tmp
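remove-capabilities.yaml is referenced but not shown; a sketch that drops the capability the chown test exercises (names assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: remove-capabilities
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      capabilities:
        drop: ["CHOWN"]   # without CAP_CHOWN, chown fails even as root
```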
Pod container that can’t write to the local filesystem:
# Create a pod that will not allow you to write to the local container filesystem:
kubectl apply -f readonly-pod.yaml
# Try to write to the container filesystem (this should fail):
kubectl exec -it readonly-pod -- touch /new-file
# Create a file on the volume mounted to the container:
kubectl exec -it readonly-pod -- touch /volume/newfile
# View the file on the volume that’s mounted:
kubectl exec -it readonly-pod -- ls -la /volume/newfile
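readonly-pod.yaml is referenced but not shown; a sketch with a read-only root filesystem plus a writable volume at /volume, matching the commands above (names and volume type assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      readOnlyRootFilesystem: true   # writes to / fail
    volumeMounts:
    - name: my-volume
      mountPath: /volume             # writes here still succeed
      readOnly: false
  volumes:
  - name: my-volume
    emptyDir: {}
```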
Pod that has different group permissions for different containers:
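No manifest follows this heading; a sketch of a pod whose containers run as different users while sharing group permissions through the pod-level securityContext (all names and IDs are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: group-context
spec:
  securityContext:            # pod-level: applies to all containers
    fsGroup: 555              # files created on volumes get this group
    supplementalGroups: [666, 777]
  containers:
  - name: first
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 1111         # different user per container
  - name: second
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      runAsUser: 2222
```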