k3s in LXC on Proxmox

On the host

Ensure the br_netfilter kernel module is loaded (the overlay module will also be needed by k3s); the sysctl below only exists once br_netfilter is loaded:

cat /proc/sys/net/bridge/bridge-nf-call-iptables
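
If this file is missing or returns 0, a minimal sketch for fixing that on the Proxmox host (assuming br_netfilter and overlay are built as loadable modules for your kernel, not compiled in):

modprobe br_netfilter
modprobe overlay

# persist across reboots (any file name under /etc/modules-load.d/ works)
printf 'br_netfilter\noverlay\n' > /etc/modules-load.d/k3s.conf

# have bridged traffic traverse iptables
sysctl net.bridge.bridge-nf-call-iptables=1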

Disable swap

sysctl vm.swappiness=0
swapoff -a
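
To keep swap disabled across host reboots, a small sketch (assumes your swap device is listed in /etc/fstab; adjust if it is configured elsewhere):

# comment out any swap entries so they are not re-activated on boot
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab

# persist the swappiness setting
echo 'vm.swappiness=0' >> /etc/sysctl.conf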

Enable IP Forwarding

The first time I tried to get this working, the cluster came up but the traefik pods were stuck in CrashLoopBackOff because IP forwarding was disabled. Since LXC containers share the host's kernel, we need to enable this on the host.

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl --system
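
You can confirm the change took effect with:

sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1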

Create the k3s container

Uncheck unprivileged container

general.png

Set swap to 0

memory.png

Enable DHCP

network.png

Results

confirm.png
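
If you prefer the CLI, the same container can be created from the Proxmox shell with pct. A rough sketch only — the VMID, template name, storage pool, and resource sizes below are assumptions, so adjust them to your environment:

pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
  --hostname k3s \
  --cores 2 \
  --memory 2048 \
  --swap 0 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8 \
  --unprivileged 0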

Back on the Host

Edit the config file for the container (/etc/pve/lxc/$ID.conf) and add the following:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
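
Note: Proxmox VE 7 and later run a pure cgroup v2 hierarchy (see the comment from davegallant below), so the device-allow line uses the cgroup2 key instead; one variant reported to work there is:

lxc.cgroup2.devices.allow: c 10:200 rwm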

In the container

/etc/rc.local

/etc/rc.local doesn't exist in the default Ubuntu 20.04 LXC template provided by Proxmox. Create it with these contents:

#!/bin/sh -e

# Kubeadm 1.15 needs /dev/kmsg to be present, but it's not available in LXC, so we symlink /dev/console in its place
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /

Then run this:

chmod +x /etc/rc.local
reboot
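
After the container comes back up, a quick optional sanity check that both tweaks took effect:

ls -l /dev/kmsg                    # should be a symlink to /dev/console
findmnt -o TARGET,PROPAGATION /    # should report shared (or rshared)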

Installing k3s

k3sup Installation

Assuming $HOME/bin is in your PATH:

curl -sLS https://get.k3sup.dev | sh
mv k3sup ~/bin/k3sup && chmod +x ~/bin/k3sup
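
You can confirm the binary is reachable on your PATH with:

k3sup version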

k3s Installation

k3sup install --ip $CONTAINER_IP --user root
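
k3sup connects to the container as root over SSH, so make sure your public key is accepted there first. One way (assumes root password login is temporarily enabled on the container; alternatively, place the key via pct push or the Proxmox console):

ssh-copy-id root@$CONTAINER_IP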

Test

KUBECONFIG=kubeconfig kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-zm7tj          1/1     Running     0          69m
kube-system   local-path-provisioner-6d59f47c7-ldbcl   1/1     Running     0          69m
kube-system   helm-install-traefik-glt48               0/1     Completed   0          69m
kube-system   coredns-7944c66d8d-67lxp                 1/1     Running     0          69m
kube-system   traefik-758cd5fc85-wzcst                 1/1     Running     0          68m
kube-system   svclb-traefik-cwd9h                      2/2     Running     0          42m
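
k3sup writes the kubeconfig file into the directory it was run from. To let kubectl pick it up for the rest of your shell session (a convenience, not a requirement):

export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes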

References


Ramblurr commented Dec 28, 2020

This was helpful, thanks for sharing. The same setup also works for microk8s!

The only addition for microk8s is to enable the fuse and nesting features with pct set $VMID --features fuse=1,nesting=1


davosian commented Jan 24, 2021

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?


triangletodd commented Feb 4, 2021

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?

I believe this article should explain what you need. I've since foolishly migrated off of Proxmox, but I'm getting ready to migrate back. TL;DR I think you just need to modify your /etc/sysctl.conf. I will fact check this document as I migrate back to Proxmox and let you know for sure.

https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf

Also, my apologies for the late reply as I have yet to solve my Github notification noise problem.


davosian commented Feb 4, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I will make sure to give you feedback on your guide as well. Just yesterday, I set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...


triangletodd commented Mar 30, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I will make sure to give you feedback on your guide as well. Just yesterday, I set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...

I would only recommend against bare metal because Proxmox and other bare-metal hypervisors give you the freedom to wipe your environment and start fresh very easily; they also let you easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use Proxmox or ESXi in a professional setting; k8s on cloud infra has been serving me for years there.


triangletodd commented Mar 30, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.


davosian commented Mar 31, 2021

I would only recommend against bare metal because Proxmox and other bare-metal hypervisors give you the freedom to wipe your environment and start fresh very easily; they also let you easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use Proxmox or ESXi in a professional setting; k8s on cloud infra has been serving me for years there.

Absolutely valuable insight. Thanks for sharing!


davosian commented Mar 31, 2021

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out on Proxmox features like snapshotting or setting up additional environments on the same machines.


cmonty14 commented Apr 29, 2021

Hello,
I followed the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?


triangletodd commented Jul 7, 2021

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out on Proxmox features like snapshotting or setting up additional environments on the same machines.

Ansible has community support for Proxmox and I've used it in the past. See: https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_module.html


triangletodd commented Jul 7, 2021

Hello,
I followed the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?

I would need the journald logs in order to assist. They would give more insight into what's causing the 127 exit code.


djw4 commented Jul 13, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.


clumbo commented Aug 17, 2021

How did you add disks to this? I see the pool as empty.


fpragana commented Aug 28, 2021

Is the k3sup installation done on the host (Proxmox), in the LXC container (k3s), or on a local station with the RSA private key?


ericp-us commented Nov 5, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.

Thanks, I'm on PVE and use Kubernetes with Rancher and this looks promising


davegallant commented Nov 14, 2021

If you're running Proxmox VE 7.0, it has switched to a pure cgroup v2 environment.

Updating /etc/pve/lxc/$ID.conf to:

lxc.cgroup2.devices.allow: c 10:200 rwm

and then installing the k3s 1.22 stream worked for me:

k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1


tabnul commented Dec 16, 2021

Nice guide, but I don't see why the first step would be needed. It comes from the wireguard/wireshark reference link, where I can understand why you would want that.

I think you need to check for the following modules:

  • overlay
  • br_netfilter

There are some issues with this: k3s logs errors on those two modules even if they are loaded correctly. Just search the Proxmox forums on that.
Process: 123 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE)
Process: 129 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)

The log above comes from a functioning k3s cluster in LXC set up following your guide; it is logged on startup of the k3s server.
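
If you want to check from inside the container whether the host already has them, lsmod shows the host's module list since the kernel is shared (a module built directly into the kernel will not appear there):

lsmod | grep -E '^(overlay|br_netfilter)'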


ajvpot commented Jan 13, 2022

I got it to launch with

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.cap.drop:
lxc.mount.auto: cgroup:rw:force

but now I'm stuck at

kubelet.go:1423] "Failed to start ContainerManager" err="[open /proc/sys/vm/overcommit_memory: read-only file system, open /proc/sys/kernel/panic: read-only file system, open /proc/sys/kernel/panic_on_oops: read-only file system]"


TopheC commented May 8, 2022

Is the k3sup installation done on the host (Proxmox), in the LXC container (k3s), or on a local station with the RSA private key?

From https://github.com/alexellis/k3sup, you must run the k3sup setup from your local station with your private key.


wuyue92tree commented Sep 29, 2022

I installed k3s successfully under LXC, but I got wrong metrics.

The CPU/memory results are not correct: they belong to the physical host, not the LXC container.

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"


{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{},"items":[{"metadata":{"name":"k3s-node-1","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-1","kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":"true","node-role.kubernetes.io/master":"true","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"1316m","memory":"17094732Ki"}},{"metadata":{"name":"k3s-node-2","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-2","kubernetes.io/os":"linux","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"952m","memory":"16942476Ki"}},{"metadata":{"name":"k3s-node-3","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-3","kubernetes.io/os":"linux","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"888m","memory":"16932068Ki"}}]}
