This guide shows how to run any kubectl commands on your local machine against clusters running remotely (on AWS, etc.) without setting up any public access to the cluster. Instead of SSHing into the remote host to run kubectl commands, you use SSH port forwarding for the port on the remote cluster where the Kubernetes API is listening, together with the same kubeconfig as the one on the remote machine. If you are also running clusters locally, you will still be able to access them. This can be achieved by merging kubeconfig files.
Copy the remote kubeconfig to your host by running the following command on your local machine:
$ ssh USER@REMOTE kubectl config view --raw > "$HOME/.kube/remote"
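If the `$HOME/.kube` directory does not exist yet, the redirect above fails; creating it first avoids that:

```shell
# Create the kubeconfig directory first; the shell redirect fails if it
# does not exist (mkdir -p is a no-op when it already does).
mkdir -p "$HOME/.kube"
```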
Verify the kubeconfig contents on your local machine using kubectl:
$ kubectl --kubeconfig="$HOME/.kube/remote" config view
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:42863
  name: kind-kind-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:36719
  name: kind-kind-2
contexts:
- context:
    cluster: kind-kind-1
    user: kind-kind-1
  name: kind-kind-1
- context:
    cluster: kind-kind-2
    user: kind-kind-2
  name: kind-kind-2
current-context: kind-kind-2
users:
- name: kind-kind-1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kind-kind-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
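The ports that need forwarding are the ones in the `server:` addresses above. A minimal sketch that pulls them out of the copied kubeconfig with plain shell (the file path is the one used above):

```shell
# Print the port of each "server:" entry in the kubeconfig,
# e.g. https://127.0.0.1:42863 -> 42863.
grep 'server:' "$HOME/.kube/remote" | while read -r _ addr; do
  echo "${addr##*:}"
done
```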
Start SSH port forwarding for each port found in the server addresses of the clusters in the kubeconfig file:
$ ssh -v -N -L 42863:127.0.0.1:42863 -L 36719:127.0.0.1:36719 USER@REMOTE
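Instead of repeating the -L flags each time, the forwards can also live in your SSH client config. A sketch with a hypothetical host alias (remote-k8s) and the example ports from above:

```
# ~/.ssh/config (host alias, HostName, and User are illustrative)
Host remote-k8s
    HostName REMOTE
    User USER
    LocalForward 42863 127.0.0.1:42863
    LocalForward 36719 127.0.0.1:36719
```

With this in place, `ssh -N remote-k8s` starts both tunnels.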
Run any kubectl commands against the remote cluster by specifying the remote kubeconfig:
# For the current context
$ kubectl --kubeconfig="$HOME/.kube/remote" get pod -A
# For a different context
$ kubectl --kubeconfig="$HOME/.kube/remote" --context="kind-kind-2" get pod -A
If you want to access any cluster, local or remote, without having to specify the --kubeconfig flag, set the KUBECONFIG environment variable to the paths of both kubeconfigs separated by a colon (KUBECONFIG="path/to/kubeconfig1:path/to/kubeconfig2").
# Merge kubeconfigs together
$ export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/remote"
# Access local cluster
$ kubectl --context=kind-local get pod -A
# Access remote cluster
$ kubectl --context=kind-kind-2 get pod -A
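kubectl treats KUBECONFIG as a colon-separated list of files and merges them in order. Splitting the variable in the shell shows that lookup order (paths as above):

```shell
# KUBECONFIG is a colon-separated list; kubectl merges the files in order.
KUBECONFIG="$HOME/.kube/config:$HOME/.kube/remote"
IFS=':' read -ra paths <<< "$KUBECONFIG"
for p in "${paths[@]}"; do
  echo "$p"
done
```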
Note that merging kubeconfigs this way only works through the KUBECONFIG environment variable; it does not work with the --kubeconfig flag. The kubectl documentation describes the exact rules it uses for loading and merging kubeconfigs.
Here is a helper script that starts SSH port forwarding for all the cluster ports found in the active kubeconfig:
#!/bin/bash
# Extract the API server address of every cluster in the kubeconfig
clusters=$(kubectl config view -o go-template='{{range .clusters}}{{printf "%v\n" .cluster.server}}{{end}}')
# Any arguments (e.g. USER@REMOTE) are passed through to ssh
args=("$@")
# Extract the port from each address and add a forwarding flag for it
while IFS= read -r addr; do
  port=${addr##*:}
  args+=(-L "$port:127.0.0.1:$port")
done <<< "$clusters"
echo "Starting port forwarding: ${args[*]}"
ssh -N "${args[@]}"
Run this script against the remote kubeconfig to automatically start port forwarding for every port in the cluster server addresses, passing the SSH destination as an argument:
$ KUBECONFIG="$HOME/.kube/remote" ./kubeconfig-port-forward.sh USER@REMOTE