CKAD Lightning Lab - Part 1
"""
Create a Persistent Volume called log-volume. It should make use of a storage class named manual. It should use RWX as the access mode and have a size of 1Gi. The volume should use the hostPath /opt/volume/nginx.
Next, create a PVC called log-claim requesting a minimum of 200Mi of storage. This PVC should bind to log-volume.
Mount this in a pod called logger at the location /var/www/nginx. This pod should use the image nginx:alpine.
"""
apiVersion: v1
kind: PersistentVolume
metadata:
  name: log-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/opt/volume/nginx"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: logger
  name: logger
spec:
  containers:
  - image: nginx:alpine
    name: logger
    resources: {}
    volumeMounts:
    - mountPath: /var/www/nginx
      name: mydir
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: mydir
    persistentVolumeClaim:
      claimName: log-claim
status: {}
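"""
check (a sketch of one way to verify, assuming the manifests above were applied with kubectl apply -f):
kubectl get pv log-volume // STATUS should show Bound
kubectl get pvc log-claim // VOLUME column should show log-volume
kubectl describe pod logger // Mounts should list /var/www/nginx from mydir
"""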
"""
We have deployed a new pod called secure-pod and a service called secure-service. Incoming and outgoing connections to this pod are not working.
Troubleshoot why this is happening.
Make sure that incoming connections from the pod webapp-color are successful.
Important: Don't delete any current objects deployed.
"""
"""
cli:
kubectl get netpol default-deny -o yaml > netpol-backup.yaml
kubectl get po --show-labels // check label
kubectl exec -it webapp-color -- nc -z -v -w 1 secure-service 80 // check traffic
"""
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: "2021-03-31T07:56:38Z"
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:policyTypes: {}
    manager: kubectl-create
    operation: Update
    time: "2021-03-31T07:56:38Z"
  name: allow
  namespace: default
  resourceVersion: "7057"
  selfLink: /apis/networking.k8s.io/v1/namespaces/default/networkpolicies/default-deny
  uid: 7387f099-43a3-4936-bdb6-d37bbe8746f9
spec:
  podSelector:
    matchLabels:
      run: secure-pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: webapp-color
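"""
check (a sketch of one way to re-test after applying the edited policy above; netpol.yaml is a hypothetical file name):
kubectl apply -f netpol.yaml
kubectl describe netpol allow // ingress should allow pods labeled name=webapp-color
kubectl exec -it webapp-color -- nc -z -v -w 1 secure-service 80 // should now succeed
"""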
"""
Create a pod called time-check in the dvl1987 namespace. This pod should run a container called time-check that uses the busybox image.
Create a config map called time-config with the data TIME_FREQ=10 in the same namespace.
The time-check container should run the command: while true; do date; sleep $TIME_FREQ;done and write the result to the location /opt/time/time-check.log.
The path /opt/time on the pod should mount a volume that lasts the lifetime of this pod.
"""
"""
cli:
kubectl create ns dvl1987
kubectl create configmap time-config --from-literal=TIME_FREQ=10 -n dvl1987
check:
kubectl logs time-check -n dvl1987
kubectl exec time-check -n dvl1987 -- cat /opt/time/time-check.log // check log
kubectl -n dvl1987 exec -it time-check -- env | grep TIME_FREQ // check env
"""
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: time-check
  name: time-check
  namespace: dvl1987
spec:
  volumes:
  - name: vol
    emptyDir: {}
  containers:
  - image: busybox
    name: time-check
    resources: {}
    volumeMounts:
    - mountPath: /opt/time
      name: vol
    command: ["/bin/sh"]
    args: ["-c", "while true; do date; sleep $TIME_FREQ;done > /opt/time/time-check.log"]
    envFrom:
    - configMapRef:
        name: time-config
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
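"""
One way to scaffold the pod spec above before adding the volume and envFrom by hand (pod.yaml is a hypothetical file name):
kubectl -n dvl1987 run time-check --image=busybox --dry-run=client -o yaml --command -- /bin/sh -c 'while true; do date; sleep $TIME_FREQ;done > /opt/time/time-check.log' > pod.yaml
"""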
"""
Create a new deployment called nginx-deploy, with one single container called nginx, image nginx:1.16 and 4 replicas. The deployment should use the RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.
Next, upgrade the deployment to version 1.17 using a rolling update.
Finally, once all pods are updated, undo the update and go back to the previous version.
"""
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        resources: {}
status: {}
"""
cli:
kubectl set image deployment/nginx-deploy nginx=nginx:1.17
kubectl rollout undo deployment/nginx-deploy
kubectl describe deployment nginx-deploy
"""
"""
Create a redis deployment with the following parameters:
Name of the deployment should be redis using the redis:alpine image. It should have exactly 1 replica.
The container should request .2 CPU. It should use the label app=redis.
It should mount exactly 2 volumes:
a. An empty directory volume called data at path /redis-master-data.
b. A configmap volume called redis-config at path /redis-master.
c. The container should expose the port 6379.
Make sure that the pod is scheduled on the master/controlplane node.
"""
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis
    spec:
      nodeName: controlplane
      volumes:
      - name: redis-config
        configMap:
          name: redis-config
      - name: data
        emptyDir: {}
      containers:
      - image: redis:alpine
        name: redis
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: redis-config
        resources:
          requests:
            cpu: "0.2"
status: {}
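"""
check (a sketch of one way to verify, assuming the redis-config configmap already exists; note that nodeName bypasses the scheduler, so the control-plane NoSchedule taint does not block this pod):
kubectl get deploy redis // should show 1/1 ready
kubectl get po -l app=redis -o wide // NODE column should show controlplane
kubectl describe po -l app=redis // confirm both volumeMounts and the cpu request
"""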