Bikram Gupta (bikram20)
bikram20 / gist:4f4dbbaf5fcc874d5daee2e3b780d919
Last active November 3, 2023 23:46
Self-install kubernetes dashboard on DOKS using helm3
# Requires helm3 installed on your local machine and an accessible cluster (kubeconfig).
# You do NOT need these instructions if you are already comfortable using Helm!
# Reference: https://github.com/kubernetes/dashboard/releases/tag/v3.0.0-alpha0
# Helm instructions: https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard
# The Dashboard 3.0 alpha is supported on k8s 1.27 only; however, we will go ahead and install it on 1.28.
# Verify your DOKS k8s version
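A minimal Helm sequence for the above, assuming doctl/kubectl are already configured against the DOKS cluster; the release and namespace names are illustrative choices, not mandated by the gist:

```shell
# Verify the cluster (server) version first
kubectl version

# Add the dashboard chart repo and install it; release/namespace names are illustrative
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard --create-namespace
```

After install, the chart prints access instructions (typically a `kubectl port-forward` command) for reaching the UI.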
bikram20 / gist:64a9da19b12b0d1e2b5dcb7916d37854
Last active March 13, 2023 04:42
Velero-demo-kubernetes
This is an end-to-end demonstration of a Velero backup/restore on DigitalOcean Kubernetes. It will not work on other providers, because we install the DigitalOcean-specific Velero plugin. Likewise, the first release requires Spaces as the backup destination. Backblaze may work if its S3 API behaves like Spaces and no complex permissions are involved.
In v1, we will only support DO Kubernetes, with Spaces and DO Volumes as the destinations.
Commands needed: kubectl, velero
Credentials needed: S3/Spaces (for velero to save backups), DO Cloud API (for velero to take volume snapshots from k8s), Kubeconfig (for velero to access k8s cluster)
This diagram shows how velero works: https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/05-setup-backup-restore/assets/images/velero_bk_res_wf.png
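The setup described above can be sketched as a single `velero install` command; the bucket, region, plugin versions, and credentials file below are placeholders, and the AWS object-store plugin is used because Spaces is S3-compatible:

```shell
# Sketch only: bucket, region, and plugin versions are placeholders to adjust.
# credentials-velero holds the Spaces access key/secret in AWS credentials format.
velero install \
  --provider velero.io/aws \
  --plugins velero/velero-plugin-for-aws:v1.5.0,digitalocean/velero-plugin:v1.1.0 \
  --bucket <your-spaces-bucket> \
  --backup-location-config s3Url=https://nyc3.digitaloceanspaces.com,region=nyc3 \
  --secret-file ./credentials-velero
```

The DO plugin additionally needs the Cloud API token so it can trigger volume snapshots; check the digitalocean/velero-plugin README for the exact snapshot-location setup.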
bikram20 / gist:f8394e222612961fd4c689f4d77ebd47
Last active February 26, 2023 17:54
DigitalOcean Docker 1-click Test App Setup
# Install a Docker 1-click droplet. Choose at least 4 vCPU / 8 GB, as we will run multiple applications.
# Create a non-root user
sudo adduser ubuntu --disabled-password
sudo usermod -aG docker ubuntu
cp -r /root/.docker /home/ubuntu/
chown -R ubuntu:ubuntu /home/ubuntu/.docker
su - ubuntu
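As a quick smoke test that the new non-root user can drive Docker, something like the following works; the container name and host port are arbitrary choices:

```shell
# Run nginx, hit it locally, then clean up (name and port are arbitrary)
docker run -d --name web -p 8080:80 nginx
curl -s http://localhost:8080 | grep -i "welcome to nginx"
docker rm -f web
```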
#################################################################################
bikram20 / Droplet-DOKS Internal communication
Last active May 18, 2021 06:12
Droplet-DOKS Internal communication:
Problem:
When you expose a service as a LoadBalancer in DO Kubernetes, the service gets a public IP; there is no internal LB. So when an application on a droplet (in the same VPC as the cluster) needs to communicate with a service in the cluster, the traffic traverses the public network.
You may instead want to keep that droplet-to-cluster traffic inside the VPC.
Solution:
Expose the service as a NodePort (not a LoadBalancer), and make the firewall unmanaged for that NodePort (through an annotation).
This ensures that the service is ONLY accessible over the NodePort on the private VPC network. As a second step, use external-dns to map the NodePort service to an FQDN in DO DNS. Because the droplets in the VPC rely on DO DNS, they pick up NodePort changes through DNS.
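The first step can be sketched as a Service manifest; the annotation names below are assumptions from memory (the DOKS unmanaged-firewall annotation and the standard external-dns hostname annotation), so verify them against the current DigitalOcean and external-dns documentation before use:

```shell
# Sketch: NodePort service kept off the managed firewall, with a hostname
# published by external-dns. Service name, hostname, and ports are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    kubernetes.digitalocean.com/firewall-managed: "false"
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```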
bikram20 / DOKS-Cluster-Network-Analysis-from-Outside
Created May 6, 2021 05:45
DOKS worker nodes accessibility from Internet
# Use "brew install nmap" or any other way to get nmap.
# Have doctl and kubectl configured.
#
bgupta@C02CC1EGMD6M employeeapp % doctl compute droplet list --tag-name 'k8s' --format 'Name'
Name
pool-4fz85fgrm-8rqqw
pool-4fz85fgrm-8yde3
bgupta@C02CC1EGMD6M employeeapp % doctl compute droplet list --tag-name 'k8s' --format 'Name','PublicIPv4'
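The listed public IPs can then be fed to nmap; the port range below is the Kubernetes NodePort default (30000-32767):

```shell
# Collect the worker nodes' public IPs and scan the default NodePort range
IPS=$(doctl compute droplet list --tag-name 'k8s' --format PublicIPv4 --no-header)
nmap -Pn -p 30000-32767 $IPS
```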
# setup.sh
#!/usr/bin/bash
kind create cluster --name cluster1 --config cluster1.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
token=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
echo "Cluster1 token:" $token
echo $token > cluster1.token
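The script references a cluster1.yaml that is not shown; a minimal illustrative kind config might look like this (the node roles are an arbitrary choice, not taken from the gist):

```shell
# Illustrative cluster1.yaml: one control-plane and one worker node
cat > cluster1.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF
```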
# Excellent site - https://openshift.tips/
# Tail the last 10 lines of a node's journal log. Various options below.
oc adm node-logs <nodename> --tail=10
oc adm node-logs ip-10-0-137-133.us-west-2.compute.internal --tail=10 -u kubelet.service
oc adm node-logs ip-10-0-137-133.us-west-2.compute.internal --tail=10 --grep=calico
# Path is w.r.t. /var/log
oc adm node-logs ip-10-0-137-133.us-west-2.compute.internal --tail=10 --path=calico/log
bikram20 / gist:89ce7b6179ff901342daf685325470d9
Last active January 3, 2020 00:17
Tracing kubernetes data path in iptables chains
# To create pods and policies
LOAD_COUNT=1
for cnt in $(seq 1 $LOAD_COUNT)
do
  kubectl create ns policy-demo${cnt}
  kubectl create deployment --namespace=policy-demo${cnt} nginx --image=nginx
  kubectl scale deployment --namespace=policy-demo${cnt} nginx --replicas=2
  kubectl expose --namespace=policy-demo${cnt} deployment nginx --port=80
done
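Once the namespaces and services exist, the kube-proxy iptables chains can be inspected on a node (or from a privileged debug pod); KUBE-SERVICES and KUBE-NODEPORTS are the standard kube-proxy chain names, and the grep pattern is illustrative:

```shell
# Inspect the chains that service traffic traverses (run on a worker node)
sudo iptables -t nat -L KUBE-SERVICES -n | head -20
sudo iptables -t nat -L KUBE-NODEPORTS -n
# Narrow down to the demo namespaces via the rule comments
sudo iptables -t nat -L -n -v | grep policy-demo
```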
bikram20 / gist:3b85438c691ecf3a0626f24b26aa9fd3
Created November 30, 2019 00:21
Rego example and input - for rego playground
package kubernetes.admission
block_latest_image_tag[explanation] {
    input.request.kind.kind == "Pod"
    containers := input.request.object.spec.containers
    image_name := containers[_].image
    is_image_tag_latest(image_name)
    explanation := sprintf("resources should not use latest tag: %v", [image_name])
}

# Helper (added so the snippet is self-contained): flags images tagged :latest
is_image_tag_latest(image_name) {
    endswith(image_name, ":latest")
}