@saiyam1814 (last active December 4, 2022 15:56)
CKS - Problems
PROBLEM 1
===============================================================
https://www.katacoda.com/courses/ubuntu/playground
curl -sfL https://get.k3s.io | sh -
kubectl run demo1 --image=nginx
kubectl create ns dev
kubectl run demo2 --image=nginx -n dev
kubectl get pods -owide
kubectl exec demo2 -n dev -- curl 10.42.0.6
NetworkPolicy:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Egress
kubectl exec demo2 -n dev -- curl 10.42.0.6
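A default-deny egress policy also blocks DNS lookups from the selected pods. A common companion policy is to allow DNS egress only; this is a sketch (the policy name allow-dns is just an example), not part of the original solution:

```shell
# Hypothetical companion policy: allow only DNS egress from the dev
# namespace while everything else stays denied by default-deny-egress.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
EOF
```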
===============================================================
PROBLEM 2
===============================================================
$ kubectl run demo1 --image=nginx
pod/demo1 created
$ kubectl create ns red
namespace/red created
$ kubectl run demo2 --image=nginx -n red
pod/demo2 created
policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: red
spec:
  podSelector:
    matchLabels:
      run: demo2
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: default
    - podSelector:
        matchLabels:
          demo: test
kubectl exec -it demo1 -- curl 10.42.0.9   # IP of demo2 in the red namespace
kubectl label ns default ns=default
namespace/default labeled
kubectl exec -it demo1 -- curl 10.42.0.9   # IP of demo2 in the red namespace
kubectl run demo3 --image=nginx -n red
kubectl exec -it demo3 -n red -- curl 10.42.0.9
kubectl label pod demo3 demo=test -n red
kubectl exec -it demo3 -n red -- curl 10.42.0.9
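The label flips above can be sanity-checked by listing the labels the policy matches on (assumes the namespace, pods, and policy created above):

```shell
kubectl get ns default --show-labels       # should include ns=default after labeling
kubectl get pods -n red --show-labels      # demo3 should include demo=test
kubectl describe networkpolicy test-network-policy -n red
```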
===============================================================
PROBLEM 3
===============================================================
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: deny
  annotations:
    container.apparmor.security.beta.kubernetes.io/deny: localhost/deny_write
spec:
  containers:
  - name: deny
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
AppArmor profile:
#include <tunables/global>

profile deny_write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
commands:
clear
cat /sys/module/apparmor/parameters/enabled
aa-status | grep deny
apparmor_parser -q aa   # load the profile (saved here in a file named "aa")
aa-status | grep deny
kubectl create -f pod.yaml
kubectl get pods
kubectl exec deny -- touch /tmp/saiyam
kubectl exec deny -- cat /proc/1/attr/current
===============================================================
PROBLEM 4
===============================================================
curl -sfL https://get.k3s.io | sh -
kubectl create ns demo
kubectl create sa sam -n demo
kubectl create clusterrole cr --verb=get,list,watch,delete --resource=secrets,pods,deployments
kubectl create rolebinding super --serviceaccount=demo:sam -n demo --clusterrole=cr
kubectl run demo --image=nginx --serviceaccount=sam -n demo
kubectl edit clusterrole cr
kubectl create role rr --verb=create --resource=deployments -n demo
kubectl create rolebinding limited --serviceaccount=demo:sam --role=rr -n demo
TOKEN=$(kubectl describe secrets "$(kubectl describe serviceaccount sam -n demo| grep -i Tokens | awk '{print $2}')" -n demo | grep token: | awk '{print $2}')
kubectl config set-credentials test-user --token=$TOKEN
kubectl config set-context demo --cluster=default --user=test-user
kubectl config use-context demo
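On Kubernetes v1.24 and later, a token Secret is no longer auto-created for a ServiceAccount, so the grep-based TOKEN extraction above comes back empty. An alternative sketch is to request a short-lived token and verify the RBAC grants directly:

```shell
TOKEN=$(kubectl create token sam -n demo)   # TokenRequest API, kubectl v1.24+
kubectl config set-credentials test-user --token=$TOKEN
# Verify what the service account can do without switching contexts:
kubectl auth can-i delete secrets -n demo --as=system:serviceaccount:demo:sam
kubectl auth can-i create deployments -n demo --as=system:serviceaccount:demo:sam
```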
===============================================================
PROBLEM 5
===============================================================
curl -sfL https://get.k3s.io | sh -
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh |sh -s -- -b /usr/local/bin
kubectl run p1 --image=nginx
kubectl run p2 --image=httpd
kubectl run p3 --image=alpine -- sleep 1000
kubectl get pods -o=jsonpath='{range.items[*]}{"\n"}{.metadata.name}{":\t"}{range.spec.containers[*]}{.image}{", "}{end}{end}' | sort
trivy image --severity HIGH,CRITICAL nginx
trivy image --severity HIGH,CRITICAL httpd
trivy image --severity HIGH,CRITICAL alpine
echo p1 $'\n'p2 > /opt/badimages.txt
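The per-image scans can also be scripted. This is a sketch that assumes the three pods above and relies on trivy's --exit-code flag (non-zero exit when findings match the severity filter):

```shell
: > /opt/badimages.txt
for pod in p1 p2 p3; do
  img=$(kubectl get pod "$pod" -o jsonpath='{.spec.containers[0].image}')
  # --exit-code 1 makes trivy exit non-zero when HIGH/CRITICAL CVEs are found
  if ! trivy image --severity HIGH,CRITICAL --exit-code 1 -q "$img" >/dev/null; then
    echo "$pod" >> /opt/badimages.txt
  fi
done
cat /opt/badimages.txt
```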
===============================================================
PROBLEM 6
===============================================================
Add to the kube-apiserver static pod manifest (/etc/kubernetes/manifests/kube-apiserver.yaml):
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit/logs/audit.log
    - --audit-log-maxsize=3
    - --audit-log-maxbackup=2
    volumeMounts:
    - mountPath: /etc/kubernetes/audit/policy.yaml
      name: audit
      readOnly: true
    - mountPath: /etc/kubernetes/audit/logs/audit.log
      name: audit-log
      readOnly: false
  volumes:
  - name: audit-log
    hostPath:
      path: /etc/kubernetes/audit/logs/audit.log
      type: FileOrCreate
  - name: audit
    hostPath:
      path: /etc/kubernetes/audit/policy.yaml
      type: File
Copy Policy from the documentation - https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-policy
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Rule added for this problem (log deployment changes at the RequestResponse level):
- level: RequestResponse
  resources:
  - group: "apps" # deployments live in the apps API group, not the core ("") group
    resources: ["deployments"]
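Once the policy and apiserver flags are in place, the rule can be exercised end to end (the deployment name audit-test is just an example; paths are the ones configured above):

```shell
kubectl create deployment audit-test --image=nginx
# Audit events land in the log path configured on the apiserver:
grep '"resource":"deployments"' /etc/kubernetes/audit/logs/audit.log | tail -1
```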
===============================================================
PROBLEM 7
===============================================================
kubectl drain controlplane --ignore-daemonsets
apt-get install kubeadm=1.19.0-00
kubeadm version
kubeadm upgrade apply v1.19.0
apt-get install kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon controlplane
kubectl drain node01 --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.19.0-00
kubeadm upgrade node
apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon node01
===============================================================
PROBLEM 8
===============================================================
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -v $(which kubectl):/usr/local/mount-from-host/bin/kubectl -v ~/.kube:/.kube -e KUBECONFIG=/.kube/config -t aquasec/kube-bench:latest run
===============================================================
PROBLEM 9
===============================================================
vi /etc/containerd/config.toml
[plugins.cri.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
# RuntimeClass
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo
handler: runc
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  runtimeClassName: demo
  containers:
  - name: test
    image: nginx
===============================================================
PROBLEM 10
===============================================================
https://falco.org/docs/rules/supported-fields/
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco
kubectl get pods
kubectl get cm
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get -y install linux-headers-$(uname -r)
apt-get install -y falco
docker run --name ubuntu_bash --rm -i -t ubuntu bash
cat /var/log/syslog | grep falco | grep ubuntu
output: "[%evt.time][%container.id] [%container.name]"
docker run --name ubuntu_bash --rm -i -t ubuntu bash
echo "[04:07:39.036002997][46c66499e2ec] [ubuntu_bash]" >> /tmp/falcologs
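The syslog line above comes from Falco's stock "Terminal shell in container" rule; a rough sequence (assuming the default package paths) is to edit that rule's output field to the format above, restart Falco, and re-trigger the event:

```shell
# Locate the default rule whose output format was changed above:
grep -n "Terminal shell in container" /etc/falco/falco_rules.yaml
systemctl restart falco
docker run --name ubuntu_bash --rm -i -t ubuntu bash   # trigger the rule again
grep falco /var/log/syslog | grep ubuntu_bash | tail -1 >> /tmp/falcologs
```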
===============================================================
PROBLEM 11
===============================================================
kubectl create secret generic database --from-literal=username=sammy --from-literal=password=demo123
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: sec-vol
      mountPath: "/etc/sec"
      readOnly: true
  volumes:
  - name: sec-vol
    secret:
      secretName: database
kubectl exec demo -- ls /etc/sec/
kubectl exec demo -- cat /etc/sec/username
kubectl exec demo -- cat /etc/sec/password
kubectl get secret database -oyaml
echo "ZGVtbzEyMw==" | base64 -d
echo demo123 >> /tmp/sec
echo "c2FtbXk=" | base64 -d
echo sammy >> /tmp/sec
cat /tmp/sec
kubectl exec demo -- cat /etc/sec/username >> /tmp/sec
kubectl exec demo -- cat /etc/sec/password >> /tmp/sec
echo "$(kubectl exec -it demo -- cat /run/secrets/kubernetes.io/serviceaccount/token)" > /tmp/token
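The decode steps above work because a Secret's data is only base64-encoded, not encrypted; anyone who can read the Secret object can recover the values. The round trip can be checked locally:

```shell
echo -n "sammy" | base64        # encodes to c2FtbXk=
echo -n "demo123" | base64      # encodes to ZGVtbzEyMw==
echo "c2FtbXk=" | base64 -d     # decodes back to sammy
```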
===============================================================
PROBLEM 12
===============================================================
PSP below
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: demo
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
test privileged pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-1
  namespace: default
spec:
  containers:
  - name: centos
    image: centos
    command: ['sh', '-c', 'sleep 999']
    securityContext:
      privileged: true
kubectl create ns test
kubectl create sa sam -n test
kubectl create role psp --verb=use --resource=podsecuritypolicy --resource-name=demo -n test
kubectl create -n test rolebinding psp-binding --role=psp --serviceaccount=test:sam
===============================================================
PROBLEM 13
===============================================================
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: run
      mountPath: /var/run
    - name: log
      mountPath: /var/log/nginx
    - name: cache
      mountPath: /var/cache/nginx
  volumes:
  - name: run
    emptyDir: {}
  - name: log
    emptyDir: {}
  - name: cache
    emptyDir: {}
kubectl exec security-context-demo -- touch /etc/test
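The touch into /etc should fail with "Read-only file system", while the emptyDir mounts stay writable. Assuming the pod above is running:

```shell
kubectl exec security-context-demo -- touch /var/cache/nginx/ok   # succeeds: emptyDir mount
kubectl exec security-context-demo -- touch /etc/test             # fails: Read-only file system
```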
Comment from @saiyam1814 (Author):
PROBLEM 1 was missing namespace: dev for the NetworkPolicy. If the namespace is not specified, traffic is still allowed even after the policy is applied.

thanks, updated

Comment from @saiyam1814 (Author):
PROBLEM 6: the audit flags should go under - command: in the kube-apiserver manifest:

spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit/logs/audit.log
    - --audit-log-maxsize=3
    - --audit-log-maxbackup=2

Thank you for these, will get this added. This gist, however, is just a helper; all the problems and solutions from the book should work as expected.
