Kubernetes DaemonSet that enables a direct shell on each Node using SSH to localhost

Getting a shell on each node

I run several K8S clusters on EKS and by default do not set up inbound SSH to the nodes. Sometimes I need to get into a node to check things or run a one-off tool.

Rather than update my Terraform, rebuild the launch templates, and redeploy brand-new nodes, I decided to use Kubernetes to access each node directly.

Alternative option

https://github.com/alexei-led/nsenter
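That project takes a different route: a privileged pod that joins the host's namespaces with nsenter instead of SSH. A rough, hypothetical one-liner in the same spirit (not taken from that repo; the image just needs nsenter from util-linux, and <node-name> is a placeholder):

kubectl run host-shell --rm -it --image=ubuntu --overrides='
{
  "spec": {
    "hostPID": true,
    "nodeName": "<node-name>",
    "containers": [{
      "name": "host-shell",
      "image": "ubuntu",
      "stdin": true,
      "tty": true,
      "securityContext": {"privileged": true},
      "command": ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid"]
    }]
  }
}'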

How it works

Attached is a DaemonSet manifest that mounts the host's /home/ec2-user/.ssh directory into a pod on each node. The pod then generates a new SSH keypair, removes any key left behind by a previous node-connect pod from authorized_keys, and appends the new public key. Because the pod runs on the host network, SSHing to localhost from inside the pod reaches the node's own sshd.
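The container's startup command (the same script that appears in the manifest below, with comments added here) does roughly the following:

# Generate a fresh, passphrase-less keypair inside the pod.
ssh-keygen -t rsa -b 4096 -C node-connect -N "" -f ~/.ssh/id_rsa

# Remove any key installed by a previous node-connect pod, then add the new one.
sed -i '/.*node-connect/d' /host/authorized_keys
cat ~/.ssh/id_rsa.pub >> /host/authorized_keys

# Pre-trust the node's host key so the first ssh does not prompt.
ssh-keyscan -H localhost >> ~/.ssh/known_hosts

# Write a small wrapper so `kubectl exec ... connect` opens the SSH session.
# The heredoc is unquoted, so ${SSH_USER} is expanded when the pod starts.
cat <<EOF > /bin/connect
#!/bin/sh
ssh ${SSH_USER}@localhost
EOF
chmod +x /bin/connect

# Print the resulting authorized_keys to the pod log, then block on stdin
# (stdin: true in the manifest) so the pod keeps running.
cat /host/authorized_keys
/bin/cat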

How to use it

Update the manifest to reflect the proper login user for your nodes. I use the Amazon Linux AMI, so the user is ec2-user; yours may be different depending on your AMI.
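For example, on an Ubuntu-based node image (where the default user is typically ubuntu), the two fields to change would be:

env:
  - name: SSH_USER
    value: ubuntu              # login user on your nodes
...
hostPath:
  path: /home/ubuntu/.ssh      # that user's .ssh directory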

Apply the DaemonSet

kubectl apply -f daemonset.yml
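Once applied, one pod should be scheduled on every node. The wide output also shows which pod landed on which node, so you can exec into the right one:

kubectl get pods -n kube-system -l name=node-connect -o wide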

Connect to a node

# kubectl exec -it -n kube-system node-connect-q529c connect
Last login: Mon Aug 24 16:32:25 2020 from localhost

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
14 package(s) needed for security, out of 40 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-16-38-42 ~]$ whoami
ec2-user
[ec2-user@ip-172-16-38-42 ~]$ w
 16:40:20 up 39 days,  9:43,  1 user,  load average: 0.43, 0.58, 0.49
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
ec2-user pts/0    localhost        16:40    4.00s  0.02s  0.00s w
[ec2-user@ip-172-16-38-42 ~]$ 
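Cleaning up

When you no longer need node access, you can remove the injected keys while the pods are still running (the same sed pattern the pod itself uses; run it from a shell on each node), then delete the DaemonSet:

# From a shell on each node (e.g. via connect), drop the node-connect key:
sed -i '/node-connect/d' ~/.ssh/authorized_keys

# Then remove the DaemonSet:
kubectl delete -f daemonset.yml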
daemonset.yml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-connect
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: node-connect
  template:
    metadata:
      labels:
        name: node-connect
    spec:
      containers:
        - name: node-connect
          image: kroniak/ssh-client
          command:
            - /bin/bash
            - -c
            - |
              ssh-keygen -t rsa -b 4096 -C node-connect -N "" -f ~/.ssh/id_rsa
              sed -i '/.*node-connect/d' /host/authorized_keys
              cat ~/.ssh/id_rsa.pub >> /host/authorized_keys
              ssh-keyscan -H localhost >> ~/.ssh/known_hosts
              cat <<EOF > /bin/connect
              #!/bin/sh
              ssh ${SSH_USER}@localhost
              EOF
              chmod +x /bin/connect
              cat /host/authorized_keys
              /bin/cat
          env:
            - name: SSH_USER
              value: ec2-user
          resources:
            limits:
              cpu: 200m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 50Mi
          stdin: true
          volumeMounts:
            - name: host-ssh-authorizedkeys
              mountPath: /host
      volumes:
        - name: host-ssh-authorizedkeys
          hostPath:
            path: /home/ec2-user/.ssh
      hostNetwork: true
@sahaniarun

very nice..it worked for my local cluster

@blessedwithsins

Awesome stuff, thanks a ton. :)

@aisangelos

effing brilliant
