@triangletodd
Last active April 10, 2024 13:40
k3s in LXC on Proxmox

On the host

Ensure the br_netfilter module is loaded; the following should return 1:

cat /proc/sys/net/bridge/bridge-nf-call-iptables
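
If the file is missing or returns 0, the br_netfilter module likely isn't loaded or the sysctl is off. A minimal way to fix both, now and across reboots (a sketch, assuming a standard Proxmox/Debian host):

modprobe br_netfilter                                       # load the module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it on every boot
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
sysctl --system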

Disable swap

sysctl vm.swappiness=0
swapoff -a
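
To make both settings survive a reboot of the host, a sketch (assuming the swap entry lives in /etc/fstab):

echo 'vm.swappiness=0' >> /etc/sysctl.conf     # persist the swappiness setting
sed -i '/\sswap\s/ s/^/#/' /etc/fstab          # comment out swap mounts so they stay off after boot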

Enable IP Forwarding

The first time I tried to get this working, once the cluster was up, the traefik pods were in CrashloopBackoff due to ip_forwarding being disabled. Since LXC containers share the host's kernel, we need to enable this on the host.

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl --system
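
You can confirm the change took effect with:

sysctl net.ipv4.ip_forward    # should print net.ipv4.ip_forward = 1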

Create the k3s container

Uncheck unprivileged container

general.png

Set swap to 0

memory.png

Enable DHCP

network.png

Results

confirm.png
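
If you prefer the CLI over the web UI, a roughly equivalent container can be created with pct. This is only a sketch: the VMID, template name, storage, and sizing below are assumptions, not values taken from the screenshots.

pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
  --hostname k3s \
  --cores 2 --memory 4096 --swap 0 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:16 \
  --unprivileged 0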

Back on the Host

Edit the config file for the container (/etc/pve/lxc/$ID.conf) and add the following:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
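
Restart the container so the new settings take effect (assuming $ID is the container's VMID):

pct stop $ID && pct start $ID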

In the container

/etc/rc.local

/etc/rc.local doesn't exist in the default 20.04 LXC template provided by Proxmox. Create it with these contents:

#!/bin/sh -e

# Kubeadm 1.15 needs /dev/kmsg to be there, but it's not available in LXC, so we can just use /dev/console instead
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /

Then run this:

chmod +x /etc/rc.local
reboot
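
After the reboot, you can verify both tweaks took effect from inside the container (a quick check, not part of the original setup):

ls -l /dev/kmsg                    # should be a symlink to /dev/console
findmnt -o TARGET,PROPAGATION /    # should report shared propagation for /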

Installing k3s

k3sup Installation

Assuming $HOME/bin is in your PATH:

curl -sLS https://get.k3sup.dev | sh
mv k3sup ~/bin/k3sup && chmod +x ~/bin/k3sup
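
You can confirm the binary is on your PATH with:

k3sup version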

k3s Installation

k3sup install --ip $CONTAINER_IP --user root
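
If you later create more containers with the same host and container tweaks, a sketch for joining them as additional nodes (assuming $AGENT_IP is the new container and root SSH access works):

k3sup join --ip $AGENT_IP --server-ip $CONTAINER_IP --user root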

Test

KUBECONFIG=kubeconfig kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-zm7tj          1/1     Running     0          69m
kube-system   local-path-provisioner-6d59f47c7-ldbcl   1/1     Running     0          69m
kube-system   helm-install-traefik-glt48               0/1     Completed   0          69m
kube-system   coredns-7944c66d8d-67lxp                 1/1     Running     0          69m
kube-system   traefik-758cd5fc85-wzcst                 1/1     Running     0          68m
kube-system   svclb-traefik-cwd9h                      2/2     Running     0          42m

References

@Ramblurr

This was helpful, thanks for sharing. The same setup also works for microk8s!

The only addition for microk8s is to enable the fuse and nesting features with pct set $VMID --features fuse=1,nesting=1

@davosian

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?

@triangletodd
Author

triangletodd commented Feb 4, 2021

Hi @triangletodd, The very first step mentions to make sure that the following modules are loaded: cat /proc/sys/net/bridge/bridge-nf-call-iptables. When I run the cat command, I am getting 0 returned. Does this mean that the module is not loaded? If so, how can I get it loaded? Can I simply set this to 1?

I believe this article should explain what you need. I've since foolishly migrated off of Proxmox, but I'm getting ready to migrate back. TL;DR I think you just need to modify your /etc/sysctl.conf. I will fact check this document as I migrate back to Proxmox and let you know for sure.

https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf

Also, my apologies for the late reply as I have yet to solve my Github notification noise problem.

@davosian

davosian commented Feb 4, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I will make sure to give you feedback on your guide as well. Just yesterday, I set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...

@triangletodd
Author

triangletodd commented Mar 30, 2021

Highly appreciated, thanks Todd!

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

Should I go down this rabbit hole, I will make sure to give you feedback on your guide as well. Just yesterday, I set up two NUCs with Proxmox (now I have 3 in total). Currently I am checking my options for moving towards Kubernetes...

I would only recommend against bare metal because Proxmox and other bare-metal hypervisors give you the freedom to wipe your environment and start fresh very easily; they also let you easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use Proxmox or ESXi in a professional setting; K8S on cloud infra has been serving me for years there.

@triangletodd
Author

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

@davosian

I would only recommend against bare metal because Proxmox and other bare-metal hypervisors give you the freedom to wipe your environment and start fresh very easily; they also let you easily spin up parallel environments and tinker. Obviously you can accomplish the same thing with docker compose or LXD/C on bare metal, but my personal tinkering time is limited and I quite enjoy having an interface to turn knobs and glance at. I would likely never use Proxmox or ESXi in a professional setting; K8S on cloud infra has been serving me for years there.

Absolutely valuable insight. Thanks for sharing!

@davosian

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out on Proxmox features like snapshotting or setting up additional environments on the same machines.

@cmonty14

Hello,
I follow the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?

@triangletodd
Author

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

I am trying to use Ansible to automate my infrastructure (partly to learn it, partly with the intention to also easily be able to start fresh). That being said, I would miss out on Proxmox features like snapshotting or setting up additional environments on the same machines.

Ansible has community support for Proxmox and I've used it in the past. See: https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_module.html

@triangletodd
Author

Hello,
I follow the instructions, but I get this error when executing k3sup:

$ k3sup install --ip 192.168.100.110 --user root
Running: k3sup install
2021/04/29 21:51:19 192.168.100.110
Public IP: 192.168.100.110
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Can you please advise how to fix this error?

I would need the journald logs in order to assist. They would give more insight into what's causing the 127 exit code.

@djw4

djw4 commented Jul 13, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.

@clumbo

clumbo commented Aug 17, 2021

How did you add disks to this? I see the pool as empty.

@fpragana

Is the k3sup installation done on the host (Proxmox), in the LXC (k3s), or on a local station with an RSA private key?

@ericp-us

ericp-us commented Nov 5, 2021

I am actually thinking of going bare metal for my cluster in an effort to reduce complexity and the additional maintenance. May I ask why you have turned around?

I stopped using proxmox because I had four hosts in a cluster and I found their clustering solution to be annoying and unnecessary for my use case. I migrated back simply for the UI and steered clear of the clustering features. There may be a better tool for what I’m doing, but I haven’t found it yet. Maybe even something as simple as webmin. Part of the reason I migrated back was not wanting to waste time worrying about this aspect of my environment and having more time to focus on k8s and other development things that interest me.

Perhaps check out harvester - that might be more what you're looking for? As you have an interest in k8s, considering the underlying 'hypervisor' is built on it, it may provide some interesting options for you to experiment with.

Thanks, I'm on PVE and use Kubernetes with Rancher and this looks promising

@davegallant

If you're running Proxmox VE 7.0, it has switched to a pure cgroupv2 environment.

Updating /etc/pve/lxc/$ID.conf to:

lxc.cgroup2.devices.allow: c 10:200 rwm

and then installing the k3s 1.22 stream worked for me:

k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1

@tabnul

tabnul commented Dec 16, 2021

Nice guide, but I don't see why the first step would be needed. It comes from the wireguard/wireshark reference link, where I can understand why you would want that.

I think you need to check for the following modules:

  • overlay
  • br_netfilter

There are some issues with this; k3s logs errors on those 2 modules even if they are loaded correctly. Just search the Proxmox forums on that.
Process: 123 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE)
Process: 129 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)

The log above comes from a functioning k3s cluster in LXC following your guide; it is logged on startup of the k3s server.

@ajvpot

ajvpot commented Jan 13, 2022

I got it to launch with

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.cap.drop:
lxc.mount.auto: cgroup:rw:force

but now I'm stuck at

kubelet.go:1423] "Failed to start ContainerManager" err="[open /proc/sys/vm/overcommit_memory: read-only file system, open /proc/sys/kernel/panic: read-only file system, open /proc/sys/kernel/panic_on_oops: read-only file system]"

@TopheC

TopheC commented May 8, 2022

Is the k3sup installation done on the host (Proxmox), in the LXC (k3s), or on a local station with an RSA private key?

From https://github.com/alexellis/k3sup, you must run the k3sup setup from your local station with your private key.

@wuyue92tree

I installed k3s successfully under LXC, but I got wrong metrics.

The CPU/memory results are not correct; they belong to the physical host, not the LXC container.

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"


{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{},"items":[{"metadata":{"name":"k3s-node-1","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-1","kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":"true","node-role.kubernetes.io/master":"true","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"1316m","memory":"17094732Ki"}},{"metadata":{"name":"k3s-node-2","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-2","kubernetes.io/os":"linux","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"952m","memory":"16942476Ki"}},{"metadata":{"name":"k3s-node-3","creationTimestamp":"2022-09-29T13:00:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"k3s","beta.kubernetes.io/os":"linux","egress.k3s.io/cluster":"true","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k3s-node-3","kubernetes.io/os":"linux","node.kubernetes.io/instance-type":"k3s"}},"timestamp":"2022-09-29T13:00:14Z","window":"1m0s","usage":{"cpu":"888m","memory":"16932068Ki"}}]}

@jmturner

[INFO] systemd: Starting k3s
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Error: error received processing command: Process exited with status 127

Forgive me for reviving an old thread but for those who still need this answer:

k3s is getting confused starting as root as it does not know it's in an unprivileged container and thus things aren't working. Add the following lines to ExecStart in the systemd file /etc/systemd/systemd/k3s.service (and then run systemctl daemon-reload) to get k3s running. This answer comes from k3s-io issue 4249.:

--kubelet-arg=feature-gates=KubeletInUserNamespace=true \
--kube-controller-manager-arg=feature-gates=KubeletInUserNamespace=true \
--kube-apiserver-arg=feature-gates=KubeletInUserNamespace=true \
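
For reference, the resulting ExecStart block in the unit file ends up looking roughly like this (a sketch; the exact flags k3sup writes may differ):

ExecStart=/usr/local/bin/k3s \
    server \
    --kubelet-arg=feature-gates=KubeletInUserNamespace=true \
    --kube-controller-manager-arg=feature-gates=KubeletInUserNamespace=true \
    --kube-apiserver-arg=feature-gates=KubeletInUserNamespace=true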

@rsik

rsik commented Feb 15, 2023

k3s is getting confused starting as root as it does not know it's in an unprivileged container and thus things aren't working. Add the following lines to ExecStart in the systemd file /etc/systemd/systemd/k3s.service (and then run systemctl daemon-reload) to get k3s running. This answer comes from k3s-io issue 4249.:

Small correction from your post: /etc/systemd/systemd/k3s.service -> /etc/systemd/system/k3s.service.

I used k3sup as normal (and received the 127 error), then edited the service file (as above), and then ran:

k3sup install --skip-install --host host.domain.tld --sudo=false --user root --ssh-key ~/.ssh/ssh_key

and all is well!

I am not sure if there is a way to pass a service file on the first install, but this worked for me - thank you @jmturner

@ky-bd

ky-bd commented Jun 19, 2023

I managed to start k3s in an unprivileged LXC container. I added the following to the CT conf file (also don't forget to check unprivileged container, or set unprivileged: 1 in the config):

lxc.cap.drop:
lxc.apparmor.profile: unconfined
lxc.mount.auto: proc:rw sys:rw cgroup:rw
lxc.cgroup2.devices.allow: c 10:200 rwm

modprobe / lsmod for br_netfilter might fail because it's already compiled into the kernel rather than being a loadable kernel module. You can check this with grep 'BRIDGE_NETFILTER' /boot/config-$(uname -r). The overlay module needs to be loaded on the Proxmox host side as well.

Then use k3sup to install (I'm actually installing and joining the k3s server to an existing cluster; modify this command as you need):

k3sup.exe join --host host.example.com --user root --ssh-key path_to_key --server-ip xxx.xxx.xxx.xxx --server --sudo=false

k3sup would report that installation succeeded, though the k3s server didn't start properly. Add these options to /etc/systemd/system/k3s.service ( https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185?permalink_comment_id=4466758#gistcomment-4466758 )

--kubelet-arg=feature-gates=KubeletInUserNamespace=true \
--kube-controller-manager-arg=feature-gates=KubeletInUserNamespace=true \
--kube-apiserver-arg=feature-gates=KubeletInUserNamespace=true \

Then restart k3s:

systemctl daemon-reload
systemctl restart k3s

Now k3s should be running fine, and there's no need to run k3sup a second time. You can check it with systemctl status k3s and kubectl get nodes -A -o wide.

@glassman81

I managed to start k3s in an unprivileged LXC container. I added the following to the CT conf file (also don't forget to check unprivileged container, or set unprivileged: 1 in the config):

lxc.cap.drop:
lxc.apparmor.profile: unconfined
lxc.mount.auto: proc:rw sys:rw cgroup:rw
lxc.cgroup2.devices.allow: c 10:200 rwm

I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.

@glassman81

glassman81 commented Jul 10, 2023

I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.

Scratch that. There are too many apps that error out when trying to do this unprivileged, like rancher.

2023/07/10 06:33:16 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2023/07/10 06:33:23 [FATAL] error running the jail command: exit status 2

Privileged works though.

@ky-bd

ky-bd commented Jul 11, 2023

I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.

Scratch that. There are too many apps that error out when trying to do this unprivileged, like rancher.

2023/07/10 06:33:16 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2023/07/10 06:33:23 [FATAL] error running the jail command: exit status 2

Privileged works though.

Yeah, I found that unprivileged LXC failed to mount block devices, so Longhorn and probably other CSI drivers won't work. I gave up and just turned to VMs instead.

@glassman81

I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.

Scratch that. There are too many apps that error out when trying to do this unprivileged, like rancher.

2023/07/10 06:33:16 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2023/07/10 06:33:23 [FATAL] error running the jail command: exit status 2

Privileged works though.

Yeah, I found that unprivileged LXC failed to mount block devices, so Longhorn and probably other CSI drivers won't work. I gave up and just turned to VMs instead.

I'm having the same problem even with privileged LXCs. Longhorn goes through this process of constantly attaching/detaching when the frontend is a block device. When it's iSCSI, it doesn't even attempt to attach, though I think that's because the CSI driver doesn't support iSCSI mode.

Did you ever get longhorn to work with privileged LXCs, or it just didn't work all around?

@glassman81

Well, it seems in its current state, longhorn won't work with LXCs:

longhorn/longhorn#2585
longhorn/longhorn#3866

This is not to say that it can't, just that someone hasn't figured it out yet. Maybe if someone like @timothystewart6 is interested (hopefully), he can have a go at it. His pretty awesome work led me here in the first place, so I can only hope.

@ky-bd

ky-bd commented Jul 14, 2023

Well, it seems in its current state, longhorn won't work with LXCs:

longhorn/longhorn#2585 longhorn/longhorn#3866

This is not to say that it can't, just that someone hasn't figured it out yet. Maybe if someone like @timothystewart6 is interested (hopefully), he can have a go at it. His pretty awesome work led me here in the first place, so I can only hope.

I read those issues before, and that's part of the reason why I gave up before trying privileged LXC.
