Setup
# <topic-name>, --partitions and --replication-factor values are placeholders
bin/kafka-topics.sh \
--zookeeper zookeeper.example.com:2181 \
--create \
--topic <topic-name> \
--partitions 1 \
--replication-factor 1
ssh-keygen -t rsa -b 4096 -m PEM -f jwtRS256.key
# Don't add a passphrase
openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
cat jwtRS256.key
cat jwtRS256.key.pub
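A quick sanity check of the key pair (a sketch; the temp directory, payload, and file names are arbitrary): sign some bytes with the private key and verify the signature with the public key, which is exactly what RS256 does over a JWT's `header.payload`.

```shell
# Work in a scratch directory so nothing existing is overwritten
tmp="$(mktemp -d)"

# Generate the key pair as above, non-interactively (no passphrase)
ssh-keygen -t rsa -b 4096 -m PEM -f "$tmp/jwtRS256.key" -N "" -q
openssl rsa -in "$tmp/jwtRS256.key" -pubout -outform PEM -out "$tmp/jwtRS256.key.pub"

# Sign an arbitrary payload with the private key
printf 'hello' > "$tmp/payload.txt"
openssl dgst -sha256 -sign "$tmp/jwtRS256.key" -out "$tmp/payload.sig" "$tmp/payload.txt"

# Verify with the public key; prints "Verified OK" on success
openssl dgst -sha256 -verify "$tmp/jwtRS256.key.pub" -signature "$tmp/payload.sig" "$tmp/payload.txt"
```

If verification fails, `openssl dgst` prints "Verification Failure" and exits non-zero.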
Unofficial AKS Cheat Sheet
The official AKS FAQ is here
# Use envFrom to load Secrets and ConfigMaps into environment variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mans-not-hot
  labels:
    app: mans-not-hot
spec:
  replicas: 1
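The snippet above is truncated before it gets to `envFrom`. A possible continuation of the spec (the image, ConfigMap, and Secret names are placeholders):

```yaml
  selector:
    matchLabels:
      app: mans-not-hot
  template:
    metadata:
      labels:
        app: mans-not-hot
    spec:
      containers:
        - name: mans-not-hot
          image: nginx                        # placeholder image
          envFrom:
            - configMapRef:
                name: mans-not-hot-config    # placeholder ConfigMap
            - secretRef:
                name: mans-not-hot-secrets   # placeholder Secret
```

Every key in the referenced ConfigMap and Secret becomes an environment variable in the container, with no per-variable boilerplate.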
$ jq -r '. | to_entries[] | "export \(.key)=\(@sh "\(.value)")"' ~/tmp/x.json
export URL='https://foo:bar@example.com/'
export OTHER='foo " bar '\''baz'\'''
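For reference, an input file shaped like this (a guess at the contents of ~/tmp/x.json; the command works on any flat object with string values) would produce the two exports shown above, with `@sh` handling the shell quoting:

```json
{
  "URL": "https://foo:bar@example.com/",
  "OTHER": "foo \" bar 'baz'"
}
```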
I run several K8s clusters on EKS and by default do not set up inbound SSH to the nodes. Sometimes I need to get onto a node to check things or run a one-off tool.
Rather than update my Terraform, rebuild the launch templates, and redeploy brand-new nodes, I decided to use Kubernetes to access each node directly.
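One way to do this without SSH (a sketch, not necessarily the exact method used here): pin a privileged pod to the target node with the host's PID namespace, then nsenter into PID 1. The node name is a placeholder, and this assumes the image ships an nsenter binary (busybox does):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  nodeName: ip-10-0-0-1.ec2.internal   # placeholder: the node to inspect
  hostPID: true
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox                   # assumes nsenter is available in the image
      command: ["sleep", "86400"]
      securityContext:
        privileged: true
```

Then `kubectl exec -it node-shell -- nsenter -t 1 -m -u -n -i sh` drops you into a shell on the node itself. On newer clusters, `kubectl debug node/<node-name> -it --image=busybox` achieves much the same without a hand-written manifest.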
# show indices on this host
curl 'localhost:9200/_cat/indices?v'

# edit the elasticsearch configuration file to allow remote reindexing
sudo vi /etc/elasticsearch/elasticsearch.yml

## copy the lines below somewhere in the file
# --- whitelist for remote reindexing ---
reindex.remote.whitelist: my-remote-machine.my-domain.com:9200
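After restarting Elasticsearch, the whitelisted host can be used as a reindex source. A request body like this (the index names are placeholders), POSTed to localhost:9200/_reindex with a Content-Type: application/json header, pulls documents from the remote cluster into a local index:

```json
{
  "source": {
    "remote": {
      "host": "http://my-remote-machine.my-domain.com:9200"
    },
    "index": "source-index"
  },
  "dest": {
    "index": "dest-index"
  }
}
```

The remote host must match an entry in `reindex.remote.whitelist` or Elasticsearch rejects the request.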