DevOps Cheat Sheet

Find process by port

sudo ss -lptn 'sport = :9000' 

Find process name by process ID (PID)

ps -p 1366 -o comm=
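The two can be chained; a sketch (assuming GNU grep for the -oP flags) that resolves the name of whatever is listening on port 9000:

PID=$(sudo ss -lptn 'sport = :9000' | grep -oP 'pid=\K[0-9]+' | head -1)
ps -p "$PID" -o comm=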

Export all environment variables from an env file

source './.env.sample'
export $(cut -d= -f1 './.env.sample')

Or use set -a, which auto-exports every variable assigned while it is active:

set -a && source ./.env.sample && set +a
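A quick end-to-end sketch, using a hypothetical ./.env.sample with KEY=value lines:

cat > ./.env.sample <<'EOF'
DB_HOST=localhost
DB_PORT=5432
EOF
set -a && source ./.env.sample && set +a
printenv DB_HOST   # prints: localhost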

Remove surrounding double quotes from a string

sed -e 's/^"//' -e 's/"$//'
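A quick check of what it does:

echo '"some value"' | sed -e 's/^"//' -e 's/"$//'   # prints: some value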

Start applications in the background

npm run build:client:watch > ~/build_client_watch.log 2>&1 &
npm run build:server:watch > ~/build_server_watch.log 2>&1 &
npm run start:server:watch > ~/start_server_watch.log 2>&1 &
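To check on or stop them later (pkill -f matches against the full command line):

tail -f ~/build_client_watch.log
pkill -f 'start:server:watch'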

AWS: switch Kubernetes (EKS) cluster context

aws eks --region us-east-1 update-kubeconfig --name eks-prod-default
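Confirm the switch took effect:

kubectl config current-context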

Assign a role to a Kubernetes node

kubectl label node <node-name> node-role.kubernetes.io/<role>=

The label value is typically left empty; the role name comes from the label key's suffix.

Example:

kubectl label node ip-10-10-0-47.ec2.internal node-role.kubernetes.io/main=
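Verify with:

kubectl get nodes   # the node now lists "main" in its ROLES column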

Find resources bound to a node

kubectl get all --field-selector='spec.nodeName=ip-10-10-0-76.ec2.internal'

AWS: log in to an ECR Docker repository

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <AWS-Account-ID>.dkr.ecr.us-east-1.amazonaws.com
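Once logged in, a typical tag-and-push looks like this (hypothetical image name; assumes the ECR repository already exists):

docker tag my-image:latest <AWS-Account-ID>.dkr.ecr.us-east-1.amazonaws.com/my-image:latest
docker push <AWS-Account-ID>.dkr.ecr.us-east-1.amazonaws.com/my-image:latest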

Run pgAdmin Docker

docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@example.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=pass' \
    -d dpage/pgadmin4

Run RabbitMQ in Docker

docker run -d --hostname localhost --name some-rabbit -p 5672:5672 -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS="password" rabbitmq:3-management

Suspend a Kubernetes CronJob

kubectl patch cronjob.batch/your-cronjob-name -p '{"spec":{"suspend":true}}'
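To resume it, flip the flag back:

kubectl patch cronjob.batch/your-cronjob-name -p '{"spec":{"suspend":false}}'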

Remove duplicates from a file

sort duplicate-file.txt | uniq > unique-file.txt
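sort -u does the same in a single step (uniq alone only removes adjacent duplicates, which is why the input is sorted first):

sort -u duplicate-file.txt > unique-file.txt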

Build docker image for different CPU Architecture

docker buildx build . --platform linux/amd64 -t keycloak-bcrypt:amd64
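buildx can also build several architectures at once and push the multi-arch manifest directly (a sketch, assuming a registry you can push to):

docker buildx build . --platform linux/amd64,linux/arm64 -t <registry>/keycloak-bcrypt:latest --push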

GCP SSH

You'll need to enable tunnel-through-IAP (Identity-Aware Proxy TCP forwarding) in IAM first.

gcloud compute ssh --zone "us-central1-a" "vm-instance-name" --project "prj-test-eng-ab98" --tunnel-through-iap

Check Assigned Roles for a pg_role

SELECT *
FROM pg_roles
WHERE oid IN (
    SELECT roleid
    FROM pg_auth_members
    WHERE member = (SELECT oid FROM pg_roles WHERE rolname = 'replicate_prod_master')
);

Check Current Project GCP

gcloud config get project

Set / Change Current Project GCP

gcloud config set project <project-id>

List deployments that have at least one replica configured (spec.replicas > 0)

kubectl get deployments -o=jsonpath='{range .items[?(@.spec.replicas > 0)]}{.metadata.name}{"\n"}{end}'
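The same filter with jq, which some find easier to read; note that .status.readyReplicas (rather than .spec.replicas) would reflect pods actually running:

kubectl get deployments -o json | jq -r '.items[] | select(.spec.replicas > 0) | .metadata.name'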

Grant a role to a service account on a specific resource

gsutil iam ch serviceAccount:service-account-name@project-id.iam.gserviceaccount.com:roles/storage.admin gs://somebucket

Grant KMS key access to a service account

gcloud kms keys add-iam-policy-binding vault-kms --location global --keyring vault-auto-unseal-kr --member serviceAccount:some-sa@project-id.iam.gserviceaccount.com  --role roles/cloudkms.admin

Run a Linux command in the background, immune to hangups

nohup bash backup-data-script.sh >> payments-logs.log 2>&1 &
  • nohup: ignores the hangup signal (SIGHUP), so the process keeps running after the terminal or SSH session closes
  • >>: appends the output to the log file
  • 2>&1: redirects stderr to stdout so errors land in the same log
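To check on it later (pgrep -af prints the PID and full command line of matching processes):

tail -f payments-logs.log
pgrep -af backup-data-script.sh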

Convert Route53 DNS records to csv

aws route53 list-resource-record-sets --hosted-zone-id "/zone/ZU8UWIY5O0XYZ" | jq -r '.ResourceRecordSets[] | [.Name, .Type, (.ResourceRecords[]? | .Value), .AliasTarget.DNSName?] | @csv'

Export a zone from AWS (using the third-party cli53 tool)

cli53 export --full ZU8UWIY5O0XYZ > output

Import DNS records to a zone on GCP

gcloud dns record-sets import --zone-file-format input-file -z="target-zone-on-gcp" --delete-all-existing

Export a zone from GCP

gcloud dns record-sets export target-file.txt -z=zone-on-gcp

Create a zone on GCP

gcloud dns managed-zones create --dns-name=yourzone.com \
    --description='description' yourzone-com

Compress directories and exclude specific directories

sudo tar --exclude='*/node_modules' --exclude='*/__pycache__' --exclude='*/.next' -zcvf target.tgz source.dir

Attach volume to EC2

aws ec2 attach-volume --region=us-east-1 --volume-id=vol-0449c1beee1eb8b31 --instance-id=i-02ed5c770220e38ee --device=/dev/sdk

Read-only mount

The following mounts /dev/sda1 read-only onto /media/2tb:

sudo mkdir /media/2tb
sudo mount -o ro /dev/sda1 /media/2tb
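Verify the mount and detach when finished:

findmnt /media/2tb
sudo umount /media/2tb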

Copy files to and from a remote Linux machine

sudo scp -i ~/.ssh/ssh-private-key source/path ubuntu@x.x.x.x:/home/ubuntu/targetDir
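The reverse direction (remote to local) just swaps source and destination (hypothetical file name):

sudo scp -i ~/.ssh/ssh-private-key ubuntu@x.x.x.x:/home/ubuntu/targetDir/some-file ./local-dir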

Grant Compute Access GCP

gcloud compute instances add-iam-policy-binding instance-name-here \
    --member='user:user@email.com' \
    --role='roles/compute.admin' \
    --zone="us-west1-b"

if-else with jq

Example payload

 { "Contents": 
  [
    { 
     "Key" "s3 key",
     "RestoreStatus: {...}
    },
    { 
     "Key" "s3 key",
     "RestoreStatus: {...}
    }
  ]
 }
aws s3api list-objects --optional-object-attributes RestoreStatus --bucket aqapop | jq -r '.Contents | map(if has("RestoreStatus") then "already restored:"+.Key else .Key end) | .[]'

Let's break down the jq command

jq -r '.Contents | map(if has("RestoreStatus") then "already restored:"+.Key else .Key end) | .[]'
  • -r: outputs raw strings, removing the quotes from the final value(s)
  • .Contents returns the Contents array (which can then be mapped)
  • has("RestoreStatus") checks whether that key exists on the object
  • .[] unwraps the array, printing each value on its own line instead of as a JSON list
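A self-contained way to try the pattern (hypothetical two-object payload, one restored and one not):

echo '{"Contents":[{"Key":"a.txt","RestoreStatus":{}},{"Key":"b.txt"}]}' | jq -r '.Contents | map(if has("RestoreStatus") then "already restored:"+.Key else .Key end) | .[]'
# already restored:a.txt
# b.txt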

Remove a particular string from a whole file

sed -e 's!gs://aqapop/!!' gcp-aqapop-list.txt > gcp-aqapop-list-cleaned.txt

Here we remove gs://aqapop/ from every line of the file; using ! as the sed delimiter avoids having to escape the slashes in the URL.
