Notes
-----
Below is a rough draft of the demos used during the course. Feel free to use and modify them as you wish in a non-production account.
Demo 1 - Create a cluster and provision the Cluster AutoScaler
------
1. Fire up an EC2 instance
- attach an IAM role to it
2. Install kubectl (install sketch below)
3. Install eksctl (install sketch below)
4. Create a cluster
eksctl create cluster --name demo --region af-south-1 --without-nodegroup --with-oidc
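Install sketch for steps 2 and 3 (minimal example, assuming a Linux x86_64 instance - adjust architecture and versions as needed):
# kubectl - fetch the latest stable build and put it on the PATH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# eksctl - download the latest release tarball and extract the binary
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/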
--------------------------------------------------------
eksctl get iamidentitymapping --cluster demo --region=af-south-1
eksctl create iamidentitymapping \
--cluster demo \
--region=af-south-1 \
--arn arn:aws:iam::111122223333:user/<ADMIN_USER_NAME> \
--group system:masters \
--no-duplicate-arns
=======================================================================================================================
Demo 2 - Create a local container image - create a local helm chart - stage it in ECR - deploy it to the cluster
----------------------------------------------------------------------------------------------------------------
local container image
docker run -it --net host nicolaka/netshoot
create ecr repository
aws ecr create-repository --repository-name helmdemo
authenticate ecr repo
aws ecr get-login-password --region af-south-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.af-south-1.amazonaws.com
tag the netshoot image
docker tag nicolaka/netshoot 111122223333.dkr.ecr.af-south-1.amazonaws.com/helmdemo:netshoot-latest
push the netshoot image
docker push 111122223333.dkr.ecr.af-south-1.amazonaws.com/helmdemo:netshoot-latest
test the image
kubectl run tmp-shell --rm -i --tty --image 111122223333.dkr.ecr.af-south-1.amazonaws.com/helmdemo:netshoot-latest -- /bin/bash
------
install helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
source <(helm completion bash)
helm completion bash | sudo tee /etc/bash_completion.d/helm > /dev/null
add the argo repository
https://artifacthub.io/packages/helm/argo/argo-cd
OR
helm search hub argo-cd
helm repo add argo https://argoproj.github.io/argo-helm
search the repository
helm search repo argo/argo-cd --version ^4.0.0
helm search repo argo/argo-cd --version ^5.0.0
------
install an older argo-cd release (4.x, so it can be upgraded later)
helm install my-argo-cd argo/argo-cd --version 4.10.9
get the secret (initial admin password)
kubectl -n default get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
login and change the password
kubectl port-forward service/my-argo-cd-argocd-server -n default 8080:443
create a new project in the Argo CD UI named gitopsrox
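The same project can also be created declaratively; a minimal AppProject sketch (the permissive values below are illustrative, not the course's exact settings):
kubectl apply -n default -f - <<EoF
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: gitopsrox
spec:
  sourceRepos:
    - "*"
  destinations:
    - namespace: "*"
      server: https://kubernetes.default.svc
EoF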
upgrade the argo-cd to the newest release
helm upgrade my-argo-cd argo/argo-cd --version 5.5.16
login and verify that the password and the gitopsrox project are retained
uninstall the release
helm list
helm uninstall my-argo-cd
remove the repo
helm repo remove argo
-----
create a chart
helm create helmdemo
code helmdemo/ -n
delete the files in templates/
create a new file named configmap.yaml in templates/ with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: helmdemo-configmap
data:
  myvalue: "EKS is awesome!"
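Optionally, template the name so two releases of the chart don't collide (standard Helm templating, shown as a variation rather than what the course used):
metadata:
  name: {{ .Release.Name }}-configmap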
package the helm chart
helm package helmdemo
authenticate helm to ecr
aws ecr get-login-password \
--region af-south-1 | helm registry login \
--username AWS \
--password-stdin 111122223333.dkr.ecr.af-south-1.amazonaws.com
push the chart to ecr
helm push helmdemo-0.1.0.tgz oci://111122223333.dkr.ecr.af-south-1.amazonaws.com/
Deploy from custom helm repo in ECR
helm install configmaptest oci://111122223333.dkr.ecr.af-south-1.amazonaws.com/helmdemo --version 0.1.0
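verify the release and the rendered resources (quick sketch)
helm list
kubectl get configmap helmdemo-configmap -o yaml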
===========================================================================================================
Demo 3 - ADOT with AMP (deploy in eu-west-1)
------
install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.9.1 \
--set installCRDs=true
install the opentelemetry operator
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install my-operator open-telemetry/opentelemetry-operator
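The operator on its own doesn't ship any metrics; a rough sketch of the kind of OpenTelemetryCollector resource it consumes for the AMP remote-write path, assuming an existing AMP workspace in eu-west-1 and an IRSA-backed service account (adot-collector below is a hypothetical name) with remote-write permissions:
cat <<EoF | kubectl apply -f -
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: amp-collector
spec:
  mode: deployment
  serviceAccount: adot-collector   # hypothetical IRSA service account
  image: public.ecr.aws/aws-observability/aws-otel-collector:latest
  config: |
    extensions:
      sigv4auth:
        region: eu-west-1
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: kubernetes-pods
              kubernetes_sd_configs:
                - role: pod
    exporters:
      prometheusremotewrite:
        endpoint: https://aps-workspaces.eu-west-1.amazonaws.com/workspaces/<WORKSPACE_ID>/api/v1/remote_write
        auth:
          authenticator: sigv4auth
    service:
      extensions: [sigv4auth]
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [prometheusremotewrite]
EoF
The prometheus receiver's pod discovery also needs RBAC (get/list/watch on pods) bound to that service account.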
===============================================================================
Demo - Horizontal Pod Autoscaler
-------------------
install older helm (use helm 3.8.2)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -s -- --version v3.8.2
install kubeopsview
helm repo add christianknell https://christianknell.github.io/helm-charts/
helm install my-kube-ops-view christianknell/kube-ops-view --version 1.1.7
port-forward
kubectl port-forward service/my-kube-ops-view -n default 8080:80
install metrics server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
enable the hpa
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
generate load
kubectl run -i --tty load-generator --image=busybox -- /bin/sh
while true; do wget -q -O - http://php-apache; done
monitor the hpa scaling
kubectl get hpa -w
-----------------------
Demo - Cluster AutoScaler
------------------
Increase the Managed Node Group settings to max instances 8
eksctl scale nodegroup --cluster=demo --nodes-max 8 --name=ng-1d9b8445
Create the required service account
eksctl create iamserviceaccount \
--name cluster-autoscaler \
--namespace kube-system \
--cluster demo \
--attach-policy-arn "arn:aws:iam::111122223333:policy/AmazonEKSClusterAutoscalerPolicy" \
--approve \
--override-existing-serviceaccounts
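The AmazonEKSClusterAutoscalerPolicy attached above is a customer-managed policy; if it doesn't exist yet, a sketch of creating it (actions taken from the Cluster Autoscaler on AWS docs):
cat <<EoF> cluster-autoscaler-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribeInstanceTypes"
      ],
      "Resource": "*"
    }
  ]
}
EoF
aws iam create-policy --policy-name AmazonEKSClusterAutoscalerPolicy --policy-document file://cluster-autoscaler-policy.json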
Deploy the Cluster AutoScaler
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
Edit the deployment - change the <YOUR CLUSTER NAME HERE> to demo
kubectl edit deployment -n kube-system cluster-autoscaler
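After the edit, the relevant container args should look roughly like this (taken from the autodiscover example manifest, with demo substituted for the placeholder):
- --cloud-provider=aws
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/demo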
Create a deployment
cat <<EoF> nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-to-scaleout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-to-scaleout
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
EoF
kubectl apply -f nginx.yaml
Scale the Deployment
kubectl scale --replicas=10 deployment/nginx-to-scaleout
Verify the auto scaling event in kube-ops-view
-----------------------------------------------------------------
Demo - Custom role binding (with a group name) mapped to an IAM identity
--------
Create ClusterRoleBinding
kubectl create clusterrolebinding devbinding --clusterrole=view --group=devs
Create a new IAM user
aws iam create-user --user-name faf
aws iam attach-user-policy --user-name faf --policy-arn arn:aws:iam::111122223333:policy/eksReadOnly
aws iam create-access-key --user-name faf
(In a new terminal)
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=af-south-1
aws sts get-caller-identity
Create the eksctl iamidentitymapping (the group must match the ClusterRoleBinding group above)
eksctl create iamidentitymapping \
--cluster demo \
--region=af-south-1 \
--arn arn:aws:iam::111122223333:user/faf \
--group devs \
--no-duplicate-arns
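verify from the new terminal (the mapped group only has the view ClusterRole, so writes should be denied) - a quick sketch:
aws eks update-kubeconfig --name demo --region af-south-1
kubectl auth can-i get pods             # expect: yes
kubectl auth can-i create deployments   # expect: no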
aws iam delete-access-key --user-name faf --access-key-id ....
aws iam delete-user --user-name faf
eksctl delete iamidentitymapping --cluster demo --arn arn:aws:iam::111122223333:user/faf
-------------------------------------------------
2048 GAME
---------
Download
curl -o 2048_full.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/examples/2048/2048_full.yaml
Edit - remove the ingress entry - change the service type to ClusterIP
cluster-ip
enable port-forwarding
kubectl port-forward -n game-2048 service/service-2048 8000:80
change the service type from ClusterIP to NodePort (make sure you open up the required node ports on the node security group - sketch below)
kubectl edit svc -n game-2048 service-2048
kubectl get svc --namespace game-2048
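opening the NodePort range on the node security group can be done with something like this (the security group id and source CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id <NODE_SG_ID> --protocol tcp --port 30000-32767 --cidr <YOUR_IP>/32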
loadbalancer (change the service type to LoadBalancer)
kubectl edit svc -n game-2048 service-2048
cleanup
kubectl delete ns game-2048
===========
Now with the AWS Load Balancer Controller
create the service account
eksctl create iamserviceaccount \
--cluster=demo \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
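If the AWSLoadBalancerControllerIAMPolicy referenced above doesn't exist yet, it can be created from the upstream policy document (version pinned to match the controller):
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json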
install aws load-balancer controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=demo \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
deploy the sample app
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/examples/2048/2048_full.yaml
alb (the ingress in the manifest provisions an ALB via the controller)
update to an NLB (delete the ingress and change the service type to LoadBalancer)
kubectl edit svc -n game-2048 service-2048
annotations:
  service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
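grab the NLB DNS name once it has provisioned (sketch):
kubectl get svc -n game-2048 service-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'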