@janeczku · Last active January 12, 2021
How-to: Connect an air-gapped k3s cluster to Rancher via enterprise proxy


    +----------------+
    |  Rancher Mgmt  |
    +--------+-------+
             ^
             |
             |   Firewall
+--------------------------+
             |
     +-------+------+
     |  HTTP PROXY  |
     +-------+------+
             ^
             |
   +------------------+
   |  +-------------+ |
   |  |Rancher Agent| |
   |  +-------------+ |
   |     k3s node     |
   +------------------+

I. Provision k3s cluster

Perform the following steps on the node designated to run the downstream k3s cluster (assuming a single-node deployment).

  1. Set up the HTTP proxy environment (important: make sure to use the same NO_PROXY variable content as shown below; you may append your own entries to it).
export HTTP_PROXY=http://10.135.163.253:8888/
export HTTPS_PROXY=http://10.135.163.253:8888/
export NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
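Before running the installer it is worth sanity-checking this environment, since a NO_PROXY that misses the cluster or service CIDRs silently routes in-cluster traffic through the proxy. The following is a minimal sketch (the `check_proxy_env` helper is our own, not part of k3s):

```shell
# Sketch: sanity-check the proxy environment from step 1 before installing.
# check_proxy_env prints what is wrong (if anything) and returns non-zero
# when a required variable or NO_PROXY entry is missing.
check_proxy_env() {
  [ -n "$HTTP_PROXY" ] && [ -n "$HTTPS_PROXY" ] || { echo "proxy variables not set"; return 1; }
  for entry in localhost 10.0.0.0/8 .svc .cluster.local; do
    # Match the entry as a whole comma-separated field of NO_PROXY
    case ",$NO_PROXY," in
      *",$entry,"*) ;;
      *) echo "NO_PROXY is missing $entry"; return 1 ;;
    esac
  done
  echo "proxy environment looks consistent"
}

check_proxy_env
```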
  2. Install k3s using the installation script

Note: We are going to use the PodPreset admission controller to inject the HTTP proxy environment variables into the agent pods. We need to enable both the admission controller and the settings.k8s.io/v1alpha1 API during installation.

Ref: https://rancher.com/docs/k3s/latest/en/installation/

curl -sfL https://get.k3s.io |
    INSTALL_K3S_SKIP_START=true \
    INSTALL_K3S_VERSION="v1.19.5+k3s2" \
    K3S_TOKEN=changeme \
    sh -s - \
    --kube-apiserver-arg=enable-admission-plugins=NodeRestriction,PodPreset \
    --kube-apiserver-arg=runtime-config=settings.k8s.io/v1alpha1=true
  3. Verify that the systemd unit environment file contains the proxy configuration:
$ cat /etc/systemd/system/k3s.service.env

K3S_TOKEN=changeme
HTTPS_PROXY=http://10.135.163.253:8888/
NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
HTTP_PROXY=http://10.135.163.253:8888/
  4. Start k3s
$ systemctl start k3s
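Once k3s is up, you can confirm that the apiserver flags took effect and the alpha settings API (which serves PodPreset) is actually available. A small sketch, where `has_podpreset_api` is our own helper that filters `kubectl api-versions` output:

```shell
# Sketch: check that the settings.k8s.io/v1alpha1 API group (required for
# PodPreset) is being served. Reads API group names on stdin, one per line.
has_podpreset_api() {
  grep -qx 'settings.k8s.io/v1alpha1' && echo enabled || echo disabled
}

# On the node, after k3s has started:
# kubectl api-versions | has_podpreset_api
```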
  5. Either continue using the remote shell to interact with the k3s cluster or copy the kubeconfig to your local workstation.
$ cat /etc/rancher/k3s/k3s.yaml

When copying the kubeconfig, replace the value of the server field with the host IP address of the k3s node.
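This replacement can be scripted. k3s writes `server: https://127.0.0.1:6443` to its kubeconfig by default; the sketch below swaps in the node's reachable address (the IP used here is a placeholder, substitute your own):

```shell
# Sketch: rewrite the server address while copying the kubeconfig off the node.
# $1 = kubeconfig file, $2 = host IP of the k3s node
rewrite_server() {
  sed "s#server: https://127.0.0.1:6443#server: https://$2:6443#" "$1"
}

# On the node (path from step 5; 10.135.163.10 is a hypothetical node IP):
# rewrite_server /etc/rancher/k3s/k3s.yaml 10.135.163.10 > ~/.kube/config-k3s
```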

II. Configure PodPreset to inject proxy configuration in agent pods

To allow the Rancher and Fleet agents to register with Rancher through the HTTP proxy, we are going to create a PodPreset resource in two namespaces that automatically injects the required environment variables.

First create the agent namespaces:

kubectl create ns fleet-system
kubectl create ns cattle-system

Then create the following PodPreset in each of the namespaces (remember to adjust the proxy variables to be identical to the ones configured previously).

cat <<'EOF' > podpreset.yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-proxy-rancher-agents
spec:
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - cattle-cluster-agent
      - cattle-agent
      - fleet-agent
  env:
    - name: HTTP_PROXY
      value: 'http://10.135.163.253:8888/'
    - name: HTTPS_PROXY
      value: 'http://10.135.163.253:8888/'
    - name: NO_PROXY
      value: 'localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local'
EOF
kubectl create -n cattle-system -f podpreset.yaml
kubectl create -n fleet-system -f podpreset.yaml
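Once the agents are deployed, you can verify the injection happened: the PodPreset admission controller annotates modified pods with a key of the form `podpreset.admission.kubernetes.io/podpreset-<name>`. A small sketch (the `check_injected` helper is our own, reading pod metadata on stdin):

```shell
# Sketch: look for the PodPreset injection marker in pod metadata.
check_injected() {
  grep -q 'podpreset.admission.kubernetes.io/podpreset-inject-proxy-rancher-agents' \
    && echo "proxy env injected" || echo "PodPreset NOT applied"
}

# Against the live cluster (assumes kubectl access):
# kubectl -n cattle-system get pod -l app=cattle-cluster-agent -o yaml | check_injected
```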

III. Import cluster into Rancher Management Server

Now import the k3s cluster into Rancher using the documented steps: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/imported-clusters/

After applying the provided registration YAML manifest, check that the cattle-cluster-agent pod is Running and Ready:

kubectl get pods -n cattle-system