@scyto
Last active December 26, 2024 14:36
my proxmox cluster

ProxMox Cluster - Soup-to-Nutz

aka what I did to get from nothing to done.

note: these are designed to be primarily a re-install guide for myself (writing things down helps me memorize the knowledge), so don't take any of this on blind faith - some areas are well tested and the docs are very robust; others, less so. YMMV

Purpose of Proxmox cluster project

Required Outcomes of cluster project


The first 3 NUCs are the new Proxmox cluster; the second set of 3 NUCs are the old Hyper-V nodes.

Update as of 9/30/2023: This cluster is no longer a PoC and is my production cluster for all my VMs and docker containers (in a VM-based swarm).

All my initial objectives have been achieved and then some. All VMs migrated from Hyper-V and working - despite some stupidity on my part - though I learnt a lot!

I will update if and when I make major changes, redesign or add new capabilities, but to be clear I now consider this gist set complete for my needs and have no more edits planned.

If you spot a critical typo let me know and I can change it, but as these are notes for me (not a tutorial) I make no promises :-)

Outcomes

  1. Hardware and Base Proxmox Install

  2. Thunderbolt Mesh Networking Setup

  3. Enable OSPF Routing on Mesh Network - deprecated - old gist here

  4. Enable Dual Stack (IPv4 and IPv6) Openfabric Routing on Mesh Network

  5. Setup Cluster

  6. Setup Ceph and High Availability

  7. Create CephFS and storage for ISOs and CT Templates

  8. Setup HA Windows Server VM + TPM

  9. How to migrate Gen2 Windows VM from Hyper-V to Proxmox

    1. Notes on migrating my real world domain controller #2
    2. Notes on migrating my real world domain controller #1 (FSMO holder, AAD Sync and CA server)
    3. Notes on migrating my windows (server 2019) admin center VM
  10. Migrate HomeAssistant VM from Hyper-V

  11. Migrate my Debian VM-based docker swarm from Hyper-V to Proxmox

  12. Extra Credit (optional):

    1. Enable vGPU Passthrough (+ Windows guest, CT guest configs)
    2. Install Let's Encrypt Cert (Cloudflare as DNS Provider)
    3. Azure Active Directory Auth
    4. Install Proxmox Backup Server (PBS) on synology with CIFS backend
    5. Send email alerts via O365 using Postfix HA Container
  13. Random Notes & Troubleshooting

TODO

  • add TLS to the mail relay? with LE certs? maybe?
  • maybe send syslog to my syslog server (securely)
  • figure out ceph public/cluster running on different networks - unclear it's needed for this size of install
  • get all nodes listening to my network UPS and shut down before power runs out
  • For the docker VMs, implement both CephFS via virtiofs and a CephFS docker volume, and test which I like best in a swarm (see the sketch after this list) - using this ceph volume guide and this mounting guide by Drallas - using one of these three ceph volume plugins: Brindster/docker-plugin-cephfs, flaviostutz/cepher, n0r1sk/docker-volume-cephfs. Each has different strengths and weaknesses (I will likely choose either the n0r1sk or the Brindster one).
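For the CephFS/docker-volume item above, a minimal sketch of the plain kernel-client route I'd compare the plugins against (the client name docker, the monitor address 10.0.0.81 and the filesystem name cephfs are assumptions for illustration):

# on a cluster node: create a cephx key scoped to the filesystem and note the printed key
ceph fs authorize cephfs client.docker / rw

# on each docker VM: mount CephFS with the kernel client
mkdir -p /mnt/docker-cephfs
mount -t ceph 10.0.0.81:6789:/ /mnt/docker-cephfs -o name=docker,secret=<key-printed-above>

# then bind-mount a subdirectory into a swarm service to compare against the volume plugins
mkdir -p /mnt/docker-cephfs/test
docker service create --name cephfs-test --mount type=bind,src=/mnt/docker-cephfs/test,dst=/data nginx:alpine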

Purpose of cluster

I have been using Hyper-V for my docker swarm cluster VM hosts (see other gists). The original intention was to try to get Thunderbolt networking for a Hyper-V cluster going, along with clustered storage for the VMs. This turns out to be super hard when using NUCs as cluster nodes due to too few disks. I looked at solar winds as an alternative, but it was both complex and not pervasive.

I had been watching Proxmox for years and thought now was a good time to jump in and see what it is all about. (I had never booted or looked at the Proxmox UI before doing this - so this documentation is soup to nuts and intended for me to reproduce if needed.)

Goals of Cluster

  1. VMs running on clustered storage {completed}
  2. Use of Thunderbolt for ~26Gbps cluster VM operations (replication, failover, etc.)
    • Thunderbolt mesh with OSPF routing {completed}
    • Ceph over thunderbolt mesh {completed}
    • VM running with live migration {completed}
    • VM running with HA failover on node failure {completed}
    • Separate VM/CT migration network over thunderbolt mesh {not started}
  3. Use low-powered, off-the-shelf Intel NUCs {completed}
  4. Migrate VMs from Hyper-V:
    • Windows Server Domain Controller / DNS / DHCP / CA / AAD Sync VMs {not started}
    • Debian Docker host VMs (for my running 3-node swarm) {not started}
    • HomeAssistant VM {not started}
  5. Sized to last me 5+ years (lol, yeah, right)

Hardware Selected

  1. 3x 13th Gen Intel NUCs (NUC13ANHi7):
    • Core i7-1360P Processor (12 Cores, 5.0 GHz, 16 Threads)
    • Intel Iris Xe Graphics
    • 64 GB DDR4 3200 CL22 RAM
    • Samsung 870 EVO SSD 1TB Boot Drive
    • Samsung 980 Pro NVME 2 TB Data Drive
    • 1x Onboard 2.5Gbe LAN Port
    • 2x Onboard Thunderbolt4 Ports
    • 1x 2.5Gbe using Intel NUCIOALUWS NVMe expansion port
  2. 3 x OWC TB4 Cables

Key Software Components Used

  1. Proxmox v8.x
  2. Ceph (included with Proxmox)
  3. LLDP (included with Proxmox)
  4. Free Range Routing (FRR) - OSPF, later OpenFabric (included with Proxmox); see the config sketch after this list
  5. nano ;-)
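For item 4 above (FRR), a minimal sketch of what the routing side of the mesh looks like with OpenFabric - the interface names en05/en06, the loopback address and the NET value are per-node assumptions, and the OpenFabric gist in the Outcomes list has the full, tested config:

# enable the openfabric daemon, then write a per-node config
sed -i 's/^fabricd=no/fabricd=yes/' /etc/frr/daemons

cat <<'EOF' >/etc/frr/frr.conf
frr defaults traditional
hostname pve1
!
interface lo
 ip address 10.0.0.81/32
 ip router openfabric 1
 openfabric passive
!
interface en05
 ip router openfabric 1
!
interface en06
 ip router openfabric 1
!
router openfabric 1
 net 49.0000.0000.0001.00
EOF

systemctl restart frr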

Key Resources Leveraged

Proxmox/Ceph Guide from packet pushers

Proxmox Forum - several community members were invaluable in providing me a breadcrumb trail.

systemd.link manual pages

udevadm manual

udev manual

@scyto

scyto commented Dec 11, 2023

yup spec:

3 Electrical Layer
3.1 On-Board Re-timers
An On-Board Re-timer shall implement an Electrical Layer as defined in the USB4 Specification with the 
following changes:
• An On-Board Re-timer shall support Gen 2 speed of 10Gbps. Support for other speeds is optional.
• An On-Board Re-timer shall support two Lanes

and

3.2 Cable Re-timers
A Cable Re-timer shall meet the requirements in the USB4 Specification with the following changes:
• A Cable Re-timer shall support Gen 2 speed of 10Gbps and Gen 3 speed of 20Gbps.
• A Cable Re-timer shall support two Lane

and from wiki

USB4 products must support 20 Gbit/s throughput and can support 40 Gbit/s throughput

as such 40Gbps seems optional... and given current DMA controller limits cap real-world throughput at ~26Gbps, no wonder many won't bother supporting more than 20 and save some money...

this is why I only buy true TB4 certified hardware - it requires the 40Gbps.

@scyto

scyto commented Dec 29, 2023

hosts interconnect only at 20 Gbps not the expected 40 Gbps

This is the minimum spec of USB4; 40Gbps is optional on USB4, while on TB4 40Gbps is required. This is why I tell people to be very careful when selecting USB4 hardware - TB4 guarantees the superset of USB4 specs.

@scyto

scyto commented Dec 29, 2023

Very cool that you did it with the AMD over USB4! How is the latency?

Definitely, nice to know it's working!

@scyto

scyto commented Dec 29, 2023

the specs of these mini-pc's clearly state that USB4 provides 40 Gbps

I couldn't find that claim anywhere on their website (I see resellers making the claim, but not Beelink). It might be worth contacting Beelink and asking; maybe they have a USB4 BIOS issue...

@DarkPhyber-hg

I am planning on setting this up once I get my mini PCs. Question for you: 26Gbps, is that per port or aggregate? If you run iperf3 from node a to nodes b and c at the same time, do they both get 26Gbps each?
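For reference, a rough sketch of running that test (the mesh addresses for nodes b and c are placeholders):

# on node b and node c: start listeners
iperf3 -s

# on node a: run against both peers in parallel, then compare with a single run
iperf3 -c 10.0.0.82 -t 30 -P 4 > /tmp/iperf_to_b.log &
iperf3 -c 10.0.0.83 -t 30 -P 4 > /tmp/iperf_to_c.log &
wait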

@rlabusiness

rlabusiness commented Apr 3, 2024

@scyto Thank you for sharing your depth and breadth of experience here!

A few questions for you:

  1. Would you mind sharing what BIOS/firmware version you're running on your Intel NUC 13 Pro i7's?
  2. Did you update your firmware before installing?
  3. Have you had any experience with different firmwares on these NUCs?
  4. Is there a version that you'd recommend?

I just bought 3 of the exact same model as you in order to replicate your build (after I experienced an issue with a single NUC 13 Pro i5 running Proxmox that caused me some major headaches).

@rlabusiness

@scyto Well - I'm responding to my own question here. Haha.

I was googling for more information on the BIOS firmware that came on all 3 of the NUCs that arrived this week (ANRPL357.0026.2023.0314.1458), and I quickly found the Proxmox forum thread below where you mention that this is the stable BIOS version you stuck with in your build. I'm thrilled about that!

https://forum.proxmox.com/threads/intel-nuc-13-pro-thunderbolt-ring-network-ceph-cluster.131107/post-582678

The last question I have before I do my deep dive is about which version of Proxmox to install. I recall reading in one of the 20 or so threads I've read that a later kernel version (6.5?) causes issues. So I'll be researching that a bit more before starting, but if you (or anyone) wants to provide a shortcut to a solid recommendation there, it would be much appreciated.

@scyto

scyto commented Apr 4, 2024

Yes, you need Proxmox kernel version 6.2.16-14-pve or higher to ensure the mesh doesn't break when nodes power cycle and to enable IPv6 correctly.

To be clear, I haven't upgraded my kernel beyond 6.2.16-14-pve - so I haven't tested to ensure nothing else has broken since then; let me know if you hit any issues.
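A quick way to check what a node is currently running (standard commands on any PVE node):

uname -r                     # running kernel, e.g. 6.2.16-14-pve or newer
pveversion -v | head -n 3    # packaged kernel / pve-manager versions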

@scyto

scyto commented Apr 4, 2024

Fatal last words: all nodes are now on 6.5.13-3-pve and everything seems fine.

Of course, installing fresh from 6.5 might be a different bag if there are setup issues, so YMMV.

@rlabusiness

@scyto - That’s great! Thanks for taking the plunge to validate 6.5. I’ll plan to start with the latest PVE installer and won’t shy away from updating. I’ll also keep an eye out for any anomalies in the process and will report back either way.

Unfortunately I’m traveling at the moment, so I won’t be able to get this built out until next week, but I’m even more excited now. If you notice anything strange with 6.5 over the next few days, please share; otherwise, I hope my next report will be one of success!

@Allistah

Allistah commented May 9, 2024

First off, thanks so very much for putting this guide together - really appreciate it! I had a question now that you've had your setup running for some time. You installed a 1TB SSD as the boot drive and a 2TB NVMe drive for the VMs. How many VMs are you running, and how is your free space looking today? Was the 1TB boot drive too much? I'm curious whether a 512GB SSD would have been plenty. Once I get two more NUC 13 Pros, I'm going to start over and give this guide a try from the ground up! I currently have the 13 Pro and two old MacBook Pros as a cluster, but replication and migrations are tough since it's over a 1Gb network and nodes 2-3 only have 16GB of RAM. Thanks again - really looking forward to trying this out!

@SchuFire

Greetings and thank you for this write up.

I am working on proving this out on a three-NUC cluster. I have the network up and running. However, after reboots the routing is sometimes set up wonky, where node1 routes through node2 to get to node3 even though node1 and node3 are directly connected. I was wondering if anyone has seen this behavior and how it can be addressed. I can get the routes correct, but it takes some time restarting thunderbolt ports and/or restarting frr.

Thanks in advance.

Steve
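For reference, a sketch of what can be checked and bounced in that situation (the interface names en05/en06 are assumptions from the thunderbolt networking gist - adjust per node):

vtysh -c "show ip route"   # routes FRR has actually installed
ip -br addr                # check both thunderbolt links (en05/en06) are UP

# bounce the suspect link, then the routing daemon, and re-check
ip link set en05 down && ip link set en05 up
systemctl restart frr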

@Akkadius

Akkadius commented Jul 29, 2024

Thank you for writing this up and sharing this with the world. Definitely enjoyed your post.

Do you have a benchmark of a Linux VM running on the Ceph storage? It would be good to see what the disk performance is like in a VM.

curl -sL yabs.sh | bash -s -- -i -g -n
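A more disk-focused alternative sketch, run inside the VM against a file on its Ceph-backed disk (the path and sizes are arbitrary):

apt install -y fio
fio --name=ceph-randwrite --filename=/root/fio.test --size=4G --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting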

@chrissi5120

What do you guys think about Intel Z890 with native Thunderbolt 4? This setup comes with a bigger footprint but is more versatile.

Sadly, I can't find anything about TB4 networking on these boards, but being "native TB4" they should do the trick?

@scyto what is your opinion, if I may ask?

@scyto

scyto commented Nov 21, 2024

@chrissi5120 Short version: you want to get a board that uses a "Software Connection Manager" - these are the only ones capable of cross-domain channel bonding to get the 26Gbps throughput (40Gbps reported link speed).

Long version:
I have had a support email thread with ASUS for the last 6 months trying to get sense out of them about which motherboards do and don't have that. For example, I have proven that all Z790 motherboards use a hardware connection manager and a discrete thunderbolt chip. This was stupid, as the 13th and 14th gen processors have the needed controllers on the chipset/CPU to do a software connection manager. I paid $1000 for my Maximus Extreme; imagine how annoyed I am that it isn't fully TB4/USB4 compliant.

I have asked for a list of their new Z890 motherboards; the reply wasn't clear on whether they use a SW connection manager or not. I give the ones that use thunderbolt 4/5 add-in cards the lowest chance of having a SW connection manager.

If you know someone running one of these boards with the latest Windows 11 it's easy to tell - in Device Manager they will see USB4 router devices, and in the Settings app there will be a new page about USB4 domains.

I won't be buying a Z890-based system until I have evidence the board runs a software connection manager.

What's wild is that ASUS refuse to believe me (despite my sending them copious evidence) that their motherboards cannot do 40Gbps connection speed for peer-to-peer networking over thunderbolt. I am pretty damn annoyed at them. I really need someone at level1tech or nexus to go get sense out of them.

All devices that use mobile chipsets and CPUs (NUCs, MS01, ZimaCube Pro, etc.) seem to have full TB4 support.

Hope that helps you dodge a bullet. Or buy a Z890, test it, report back, and send it back for a refund if it doesn't support SW CM ;-)
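On a Linux host the equivalent rough check is to look at the thunderbolt/USB4 bus directly (a sketch; boltctl and the sysfs path below are standard on current kernels):

boltctl list                          # enumerates thunderbolt/USB4 devices and domains
dmesg | grep -iE "thunderbolt|usb4"   # thunderbolt driver messages from boot
ls /sys/bus/thunderbolt/devices/      # domains appear as domain0, domain1, ...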

@chrissi5120

Ah man... thank you so much for your quick feedback. I was just about to be a smartass and buy last-gen Intel hardware on eBay.
Based on your explanation I will definitely not buy Z890, and will go for NUC-like hardware, probably exactly your setup.

The thing is, I wanted to save some bucks in the beginning and expand later with more and better additional hardware / a better upgrade path.

I think your reasoning has a specifically smart angle for someone from Germany: currently I pay 32 euro cents per kWh, and based on that fact alone a NUC setup will save me a lot of money in the long run.

Thank you very much again. Stuff like this is just pure gold for the community.

@chrissi5120

Just one more "low-cost" idea:

AM4/AM5 mainboard with a G-series CPU (integrated GPU)
up to 128 GB memory
PCIe network card like the Dell EMC Broadcom BCM57414, which is said to sip power at around 5W
DAC cabling

This might come out much cheaper but of course has a giant footprint compared to your solution.

The guide you provided could be forked and reused 99% for this kind of setup, I think?

Power consumption would be "much" higher but still manageable.
