k3s in LXC on Proxmox

On the host

Ensure the br_netfilter module is loaded and that bridged traffic is passed to iptables:

cat /proc/sys/net/bridge/bridge-nf-call-iptables
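
If that file doesn't exist, the br_netfilter module isn't loaded; if it reads 0, bridged traffic isn't being passed to iptables. A minimal sketch of loading and persisting both (paths are the usual ones, adjust for your distro):

modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
sysctl --system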

Disable swap

sysctl vm.swappiness=0
swapoff -a
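
Both commands only affect the running system. To keep swap disabled across reboots, persist the sysctl and comment out any swap entries in /etc/fstab (a sketch; assumes swap is mounted via fstab):

echo 'vm.swappiness=0' >> /etc/sysctl.conf
sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap lines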

Enable IP Forwarding

The first time I tried to get this working, once the cluster was up, the traefik pods were in CrashLoopBackOff because IP forwarding was disabled. Since LXC containers share the host's kernel, we need to enable this on the host.

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl --system
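
A quick check that the setting took effect:

sysctl net.ipv4.ip_forward   # should print: net.ipv4.ip_forward = 1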

Create the k3s container

Uncheck unprivileged container

general.png

Set swap to 0

memory.png

Enable DHCP

network.png

Results

confirm.png
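
If you prefer the CLI over the GUI, roughly the same container can be created with pct on the host. The VMID, template name, storage, and sizes below are only examples; adjust them to your environment:

pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
  --hostname k3s \
  --unprivileged 0 \
  --memory 4096 --swap 0 \
  --cores 2 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp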

Back on the Host

Edit the config file for the container (/etc/pve/lxc/$ID.conf) and add the following:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
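
Note: Proxmox VE 7.0 and later run a pure cgroup v2 hierarchy, so the device-allow line needs the cgroup2 prefix (see @davegallant's comment below). A hedged adaptation, assuming everything else stays the same:

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"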

In the container

/etc/rc.local

/etc/rc.local doesn't exist in the default Ubuntu 20.04 LXC template provided by Proxmox. Create it with these contents:

#!/bin/sh -e

# Kubeadm 1.15 needs /dev/kmsg to be there, but it's not in LXC; we can just use /dev/console instead
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /

Then run this:

chmod +x /etc/rc.local
reboot
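
After the reboot, a quick way to confirm both workarounds took effect inside the container:

ls -l /dev/kmsg                    # should be a symlink to /dev/console
findmnt -o TARGET,PROPAGATION /    # should report shared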

Installing k8s

k3sup Installation

Assuming $HOME/bin is in your PATH:

curl -sLS https://get.k3sup.dev | sh
mv k3sup ~/bin/k3sup && chmod +x ~/bin/k3sup
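
The install script drops the binary in the current directory, hence the mv. A quick sanity check afterwards:

k3sup version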

k8s Installation

k3sup install --ip $CONTAINER_IP --user root
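
If the defaults don't match your setup, k3sup also takes flags for the SSH key, the k3s channel, and where to write the kubeconfig. The values below are only illustrative:

k3sup install \
  --ip $CONTAINER_IP \
  --user root \
  --ssh-key ~/.ssh/id_rsa \
  --k3s-channel stable \
  --local-path ./kubeconfig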

Test

KUBECONFIG=kubeconfig kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-zm7tj          1/1     Running     0          69m
kube-system   local-path-provisioner-6d59f47c7-ldbcl   1/1     Running     0          69m
kube-system   helm-install-traefik-glt48               0/1     Completed   0          69m
kube-system   coredns-7944c66d8d-67lxp                 1/1     Running     0          69m
kube-system   traefik-758cd5fc85-wzcst                 1/1     Running     0          68m
kube-system   svclb-traefik-cwd9h                      2/2     Running     0          42m

References

https://github.com/kubernetes-sigs/kind/issues/662
https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c

@Ramblurr Ramblurr commented Dec 28, 2020

This was helpful, thanks for sharing. The same setup also works for microk8s!

The only addition for microk8s is to enable the fuse and nesting features with pct set $VMID --features fuse=1,nesting=1

@davosian davosian commented Jan 24, 2021

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?

@triangletodd triangletodd commented Feb 4, 2021

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?

I believe this article should explain what you need. I've since foolishly migrated off of Proxmox, but I'm getting ready to migrate back. TL;DR I think you just need to modify your /etc/sysctl.conf. I will fact check this document as I migrate back to Proxmox and let you know for sure.

https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf

Also, my apologies for the late reply as I have yet to solve my Github notification noise problem.

@davosian davosian commented Feb 4, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I will make sure to give you feedback on your guide as well. Just yesterday I set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...

@triangletodd triangletodd commented Mar 30, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I can give you feedback on your guide as well, I will make sure to provide feedback myself. Just yesterday, I have set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...

I would only recommend against bare metal as proxmox and other bare metal hypervisors allow you the freedom to wipe your environment and start fresh very easily; they also let you easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use proxmox or esxi in a professional setting. K8S on cloud infra has been serving me for years there.

@triangletodd triangletodd commented Mar 30, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

@davosian davosian commented Mar 31, 2021

I would only recommend against bare metal as proxmox and other bare metal hypervisors allow you the freedom to wipe your environment and start fresh very easily, it also allows you to easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use proxmox or esxi in a professional setting. K8S on cloud infra has been serving me for years there.

Absolutely valuable insight. Thanks for sharing!

@davosian davosian commented Mar 31, 2021

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out on Proxmox features like snapshotting or setting up additional environments on the same machines.

@cmonty14 cmonty14 commented Apr 29, 2021

Hello,
I follow the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?

@triangletodd triangletodd commented Jul 7, 2021

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out of Proxmox features like snapshotting or setting up additonal environments on the same machines.

Ansible has community support for Proxmox and I've used it in the past. See: https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_module.html

@triangletodd triangletodd commented Jul 7, 2021

Hello,
I follow the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?

I would need the journald logs in order to assist. They would give more insight into what's causing the 127 exit code.
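
For anyone hitting the same failure, the usual places to look inside the container are the k3s unit's status and journal, e.g.:

systemctl status k3s
journalctl -u k3s --no-pager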

@djw4 djw4 commented Jul 13, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.

@clumbo clumbo commented Aug 17, 2021

How did you add disks to this? I see the pool as empty.

@fpragana fpragana commented Aug 28, 2021

Is the k3sup installation done on the host (Proxmox), in the LXC container (k3s), or on a local workstation with the RSA private key?

@ericp-us ericp-us commented Nov 5, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.

Thanks, I'm on PVE and use Kubernetes with Rancher and this looks promising

@davegallant davegallant commented Nov 14, 2021

If you're running Proxmox VE 7.0, it has switched to a pure cgroup v2 environment.

Updating /etc/pve/lxc/$ID.conf to:

lxc.cgroup2.devices.allow: c 10:200 rwm

and then installing the k3s 1.22 stream worked for me:

k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1