Access Remote Kubernetes Clusters Locally

The following guide lets you run kubectl commands on your local machine against clusters running remotely (on AWS, etc.) without setting up any public access to the cluster.

How?

Instead of SSHing into the remote host to run kubectl commands, you can use SSH port forwarding for the port where the remote Kubernetes API server is listening, together with the same kubeconfig that lives on the remote machine.
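
For example, if the remote API server listens on 127.0.0.1:6443 (the usual kube-apiserver default; kind clusters, as in the sample below, pick a random port instead), a single forward would look like:

$ ssh -N -L 6443:127.0.0.1:6443 USER@REMOTE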

What if I run local clusters too?

If you are running some clusters locally as well, you will still be able to access them. This can be achieved by merging kubeconfig files, as shown in the KUBECONFIG step below.

Step by Step Guide

Copy the remote kubeconfig to your host by running the following command on your local machine:

$ ssh USER@REMOTE kubectl config view --raw > "$HOME/.kube/remote"

Verify the kubeconfig contents on your local machine using kubectl:

$ kubectl --kubeconfig="$HOME/.kube/remote" config view
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:42863
  name: kind-kind-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:36719
  name: kind-kind-2
contexts:
- context:
    cluster: kind-kind-1
    user: kind-kind-1
  name: kind-kind-1
- context:
    cluster: kind-kind-2
    user: kind-kind-2
  name: kind-kind-2
current-context: kind-kind-2
users:
- name: kind-kind-1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kind-kind-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Start SSH port forwarding for each port found in the cluster server addresses of the kubeconfig file:

$ ssh -N -L 42863:127.0.0.1:42863 -L 36719:127.0.0.1:36719 USER@REMOTE
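
To quickly check that a tunnel works, you can hit the version endpoint through it (port taken from the sample kubeconfig above; -k skips certificate verification, and on most clusters /version is readable anonymously):

$ curl -k https://127.0.0.1:42863/version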

Run any kubectl command against the remote cluster by specifying the remote kubeconfig:

# For current context
kubectl --kubeconfig="$HOME/.kube/remote" get pod -A

# For different context
kubectl --kubeconfig="$HOME/.kube/remote" --context="kind-kind-2" get pod -A
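
To avoid passing --context on every command, you can switch the current context stored in the remote kubeconfig (context name taken from the sample output above):

$ kubectl --kubeconfig="$HOME/.kube/remote" config use-context kind-kind-1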

If you want to access any cluster, local or remote, without having to specify the --kubeconfig flag, set the KUBECONFIG env var to the paths of both kubeconfigs separated by a colon (KUBECONFIG="path/to/kubeconfig1:path/to/kubeconfig2").

# Merge kubeconfigs together
$ export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/remote"

# Access local cluster
$ kubectl --context=kind-local get pod -A

# Access remote cluster
$ kubectl --context=kind-kind-2 get pod -A
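
To confirm the merge worked, list all contexts now visible to kubectl; both the local and remote ones should show up:

$ kubectl config get-contexts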

⚠️ NOTE: Multiple kubeconfigs can only be combined via the KUBECONFIG env var; the --kubeconfig flag accepts a single file. See the kubectl documentation on merging kubeconfig files for the exact loading rules.
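
If you do need a single file that also works with the --kubeconfig flag, you can flatten the merged view into one self-contained kubeconfig (the output path here is just an example):

$ KUBECONFIG="$HOME/.kube/config:$HOME/.kube/remote" kubectl config view --flatten > "$HOME/.kube/merged"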

Helper script

Here is a helper script that starts SSH port forwarding for all the cluster ports found in the kubeconfig.

#!/bin/bash

# Extract cluster server addresses from the active kubeconfig
clusters=$(kubectl config view -o go-template='{{range .clusters}}{{printf "%v\n" .cluster.server}}{{end}}')

# Any extra arguments to the script are passed through to ssh (e.g. USER@REMOTE)
args=("$@")

# Extract the port from each address and add a local forward for it
while IFS= read -r addr; do
    port=${addr##*:}  # strip everything up to the last ':'
    args+=(-L "$port:127.0.0.1:$port")
done <<< "$clusters"

echo "Starting port forwarding: ssh -N ${args[*]}"
ssh -N "${args[@]}"

Run this script with the remote kubeconfig to automatically start port forwarding for all the ports found in the cluster server addresses:

$ KUBECONFIG="$HOME/.kube/remote" ./kubeconfig-port-forward.sh USER@REMOTE