Maksim Paskal (maksim-paskal)

#!/bin/sh
# Request temporary MFA-backed credentials (36 hours) and store them in the
# "prod" profile; pass the current code from your MFA device as $1.
auth=$(aws sts get-session-token --profile prod-static \
  --serial-number arn:aws:iam::<mfa> \
  --duration-seconds 129600 \
  --token-code "$1")
aws --profile prod configure set region us-east-1
aws --profile prod configure set aws_access_key_id "$(echo "$auth" | jq -r .Credentials.AccessKeyId)"
aws --profile prod configure set aws_secret_access_key "$(echo "$auth" | jq -r .Credentials.SecretAccessKey)"
# Temporary STS credentials are only valid together with their session token.
aws --profile prod configure set aws_session_token "$(echo "$auth" | jq -r .Credentials.SessionToken)"
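
Usage, assuming the script above is saved as aws-mfa-login.sh (a hypothetical name) and 123456 is the current code shown by your MFA device:

./aws-mfa-login.sh 123456
aws --profile prod sts get-caller-identity   # verify the temporary credentials work
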
#!/bin/bash
# Collect k3s service logs into a temporary directory.
# MKTEMP_BASEDIR is expected to be set elsewhere (an optional mktemp template);
# it is left unquoted on purpose so that, when unset, mktemp falls back to its default.
TMPDIR=$(mktemp -d $MKTEMP_BASEDIR)

function check_service {
  mkdir -p "$TMPDIR/logs/"
  # Dump the last 100000 journal lines for the k3s unit.
  journalctl -n 100000 --unit=k3s > "$TMPDIR/logs/journalctl-k3s"
}
maksim-paskal / change-ami-to-aws-eks.md
Created February 23, 2021 14:24
Simple bash script to change the AMI in an AWS EKS cluster
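
The script itself is not reproduced in this listing; a minimal sketch of the idea, assuming an EKS managed node group backed by a launch template (the cluster, node group and launch template names and the AMI ID below are placeholder assumptions, not taken from the gist):

#!/bin/bash
set -e
CLUSTER_NAME=my-cluster            # assumption: EKS cluster name
NODEGROUP_NAME=my-nodegroup        # assumption: managed node group name
LAUNCH_TEMPLATE_NAME=my-eks-lt     # assumption: launch template used by the node group
NEW_AMI_ID=ami-0123456789abcdef0   # assumption: the AMI to switch to

# Base a new launch template version on the current latest one, changing only the AMI.
LATEST_VERSION=$(aws ec2 describe-launch-templates \
  --launch-template-names "$LAUNCH_TEMPLATE_NAME" \
  --query 'LaunchTemplates[0].LatestVersionNumber' --output text)
NEW_VERSION=$(aws ec2 create-launch-template-version \
  --launch-template-name "$LAUNCH_TEMPLATE_NAME" \
  --source-version "$LATEST_VERSION" \
  --launch-template-data "{\"ImageId\":\"$NEW_AMI_ID\"}" \
  --query 'LaunchTemplateVersion.VersionNumber' --output text)

# Point the node group at the new version; EKS performs a rolling node replacement.
aws eks update-nodegroup-version \
  --cluster-name "$CLUSTER_NAME" \
  --nodegroup-name "$NODEGROUP_NAME" \
  --launch-template "name=$LAUNCH_TEMPLATE_NAME,version=$NEW_VERSION"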

Simple Envoy configuration with basic authentication and without an external authorization service

Sometimes you need to scrape Prometheus metrics from an external Envoy that is not deployed in a Kubernetes environment.

You can use iptables or similar tooling on the external server to allow only trusted IPs to scrape metrics, but with dynamic infrastructure that is hard to maintain.

Envoy can expose these metrics in a more elegant way: with basic auth.

Simple envoy.yaml
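
The gist's envoy.yaml is not included in this listing; a minimal sketch of the pattern, assuming Envoy's built-in envoy.filters.http.basic_auth HTTP filter (available in recent Envoy releases) in front of a route that proxies requests to the local admin interface's /stats/prometheus endpoint. The ports, user name and password hash below are placeholders:

static_resources:
  listeners:
  - name: metrics
    address:
      socket_address: { address: 0.0.0.0, port_value: 9102 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: metrics
          route_config:
            virtual_hosts:
            - name: metrics
              domains: ["*"]
              routes:
              # Expose /metrics externally and proxy it to the local admin endpoint.
              - match: { path: "/metrics" }
                route:
                  cluster: envoy_admin
                  prefix_rewrite: "/stats/prometheus"
          http_filters:
          # Reject requests without valid credentials before they reach the route.
          - name: envoy.filters.http.basic_auth
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.basic_auth.v3.BasicAuth
              users:
                # htpasswd (SHA) entries, e.g. generated with: htpasswd -nbs prometheus <password>
                inline_string: |
                  prometheus:{SHA}replace-with-real-hash
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: envoy_admin
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: envoy_admin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 9901 }
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

Prometheus can then scrape http://<external-host>:9102/metrics with matching basic_auth credentials in its scrape config.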

PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.disk.threshold_enabled": true,
"cluster.routing.allocation.disk.watermark.low": "93%",
"cluster.routing.allocation.disk.watermark.high": "95%",
"cluster.info.update.interval": "5m"
}
}
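
For reference, the same persistent settings can be applied with curl instead of the Kibana Dev Tools console (localhost:9200 is an assumption about where Elasticsearch listens):

curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.routing.allocation.disk.threshold_enabled": true,
      "cluster.routing.allocation.disk.watermark.low": "93%",
      "cluster.routing.allocation.disk.watermark.high": "95%",
      "cluster.info.update.interval": "5m"
    }
  }'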

Class SRE Implements DevOps

Site Reliability Engineering (SRE) and DevOps are two ideas that have different origins but the same underlying objectives.

DevOps advocates for certain practices that increase success and productivity in a team building and running software. These include:

  1. Reduce organizational silos
  2. Accept failure as normal
  3. Implement gradual change
  4. Leverage automation and tooling
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maintance-pod
spec:
  selector:
    matchLabels:
      app: maintance-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: maintance-pod
    spec:
      containers:
      # Truncated in the original snippet; placeholder container (image/command are assumptions).
      - name: maintance-pod
        image: busybox
        command: ["sleep", "86400"]

# Clean up pods left in terminal phases across all namespaces.
# Note: "Evicted" is not a pod phase; evicted pods are reported with phase Failed,
# so the first command already covers them.
kubectl delete pods --field-selector=status.phase=Failed -A
kubectl delete pods --field-selector=status.phase=Succeeded -A