Proxmox VE Installation on Hetzner Server via Rescue System

Follow these steps to install Proxmox VE on a Hetzner server via the Rescue System. The Rescue System is a Linux-based environment that you can boot into to perform system recovery tasks; we'll use it to install Proxmox VE.

To complete the process, you must first boot into the Rescue System and then connect to it via SSH. This will allow you to run the commands for installing Proxmox VE. Here are the steps:

Starting the Rescue System

  1. Log into the Hetzner Robot.
  2. Under "Main Functions; Server" select the desired server and then open the tab "Rescue".
  3. Here, you can activate the desired variant (choose Linux 64bit).
  4. You will be given a password which you will use to login as "root" via SSH.

Restarting the Server

  1. To load the Rescue System, the server must be restarted.
  2. If you no longer have access to the server, you can use the reset function in the Robot. You will find this under the "Reset" tab of the desired server.

Note: The activation of the Rescue System is only valid for one boot. If you want to boot your server to the Rescue System again, you will have to activate it in the Hetzner Robot again. If you do not reboot your server within 60 minutes after the activation, the scheduled boot of the Rescue System will automatically become inactive. If the server is restarted later, the system will boot from the internal drive(s).

Connecting to the Rescue System via SSH

  1. Once the server has been restarted and the Rescue System is active, you can connect to it via SSH.
  2. You can connect using the following command, replacing <SERVER_IP> with the IP address of your server:
    ssh root@<SERVER_IP>
  3. You will be prompted to enter the password you were given when you activated the Rescue System. After entering the password, you should be logged in as the root user on your server running the Rescue System.

After connecting to the Rescue System via SSH, you can then proceed with the steps for installing Proxmox VE as outlined in this guide.

1. Fetch the Proxmox VE ISO

First, retrieve the ISO image of the latest Proxmox VE version. Execute the following commands in the Rescue System:

ISO_VERSION=$(curl -s 'http://download.proxmox.com/iso/' | grep -oP 'proxmox-ve_\d+\.\d+-\d+\.iso' | sort -V | tail -n1)
ISO_URL="http://download.proxmox.com/iso/$ISO_VERSION"
curl -o /tmp/proxmox-ve.iso "$ISO_URL"
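
Optionally, verify the download against the checksum file Proxmox publishes alongside the ISOs (the SHA256SUMS path below is an assumption based on the current layout of download.proxmox.com):

# Optional: verify the ISO against the published checksums
curl -s 'http://download.proxmox.com/iso/SHA256SUMS' | grep "$ISO_VERSION" \
  | sed "s|$ISO_VERSION|/tmp/proxmox-ve.iso|" | sha256sum -c -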

2. Acquire Network Configuration

Obtain the network configuration, i.e., the network interface name, IP address, CIDR, and gateway:

# Predictable interface name that Proxmox will use after boot (e.g. enp0s31f6);
# the Rescue System itself still names the interface eth0
INTERFACE_NAME=$(udevadm info -q property /sys/class/net/eth0 | grep "ID_NET_NAME_PATH=" | cut -d'=' -f2)
# IPv4 address with prefix length, and the default gateway
IP_CIDR=$(ip addr show eth0 | grep "inet\b" | awk '{print $2}')
GATEWAY=$(ip route | grep default | awk '{print $3}')
IP_ADDRESS=$(echo "$IP_CIDR" | cut -d'/' -f1)
CIDR=$(echo "$IP_CIDR" | cut -d'/' -f2)
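
It's worth echoing the captured values before moving on; an empty variable here would silently produce a broken network configuration in step 6:

# Sanity check: none of these values should be empty
echo "Interface: $INTERFACE_NAME, IP: $IP_ADDRESS/$CIDR, Gateway: $GATEWAY"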

3. Initiate QEMU and Start Proxmox Installation

Kick off QEMU for Proxmox installation using the downloaded ISO:

# Identify the first two physical disks (-e excludes RAM, loop, and CD-ROM devices by major number)
PRIMARY_DISK=$(lsblk -dn -o NAME,SIZE,TYPE -e 1,7,11,14,15 | sed -n 1p | awk '{print $1}')
SECONDARY_DISK=$(lsblk -dn -o NAME,SIZE,TYPE -e 1,7,11,14,15 | sed -n 2p | awk '{print $1}')

# Kick off QEMU with CDROM
qemu-system-x86_64 -daemonize -enable-kvm -m 10240 \
-hda /dev/$PRIMARY_DISK \
-hdb /dev/$SECONDARY_DISK \
-cdrom /tmp/proxmox-ve.iso -boot d -vnc :0,password -monitor telnet:127.0.0.1:4444,server,nowait

# Set VNC password
echo "change vnc password <VNC_PASSWORD>" | nc -q 1 127.0.0.1 4444

Replace <VNC_PASSWORD> with your desired VNC password. You can now access the Proxmox VE installer via a VNC viewer at <SERVER_IP>:5900 (VNC display :0) using the password you set above. Note: on newer QEMU versions the option must be written as -vnc :0,password=on (see the last comment below). If your server has only one drive, omit the -hdb line.
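
If you want to confirm that QEMU is running and the monitor is reachable before opening the VNC viewer, you can query it over the same telnet socket (info status is a standard QEMU monitor command):

# Optional: query the QEMU monitor; it should report "VM status: running"
echo "info status" | nc -q 1 127.0.0.1 4444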

4. Stop QEMU and Reboot into Proxmox VE

After you've completed the Proxmox installation in the VNC session, stop QEMU:

# Stop QEMU
printf "quit\n" | nc 127.0.0.1 4444

5. Configure QEMU Startup After Installation

With the installation complete, restart QEMU without the CDROM so you can configure the freshly installed system before its first bare-metal boot:

# Kick off QEMU without CDROM
qemu-system-x86_64 -daemonize -enable-kvm -m 10240 \
-hda /dev/$PRIMARY_DISK \
-hdb /dev/$SECONDARY_DISK \
-vnc :0,password -monitor telnet:127.0.0.1:4444,server,nowait \
-net user,hostfwd=tcp::2222-:22 -net nic

# Set VNC password
echo "change vnc password <VNC_PASSWORD>" | nc -q 1 127.0.0.1 4444

6. Transfer Network Configuration and Update Nameserver

Now, transfer your network configuration to your Proxmox VE system:

cat > /tmp/proxmox_network_config << EOF
auto lo
iface lo inet loopback

iface $INTERFACE_NAME inet manual

auto vmbr0
iface vmbr0 inet static
  address $IP_ADDRESS/$CIDR
  gateway $GATEWAY
  bridge_ports $INTERFACE_NAME
  bridge_stp off
  bridge_fd 0
EOF

# transfer the network configuration file to Proxmox VE system
sshpass -p "<ROOT_PASSWORD>" scp -o StrictHostKeyChecking=no -P 2222 /tmp/proxmox_network_config root@localhost:/etc/network/interfaces

# update the nameserver
sshpass -p "<ROOT_PASSWORD>" ssh -o StrictHostKeyChecking=no -p 2222 root@localhost "sed -i 's/nameserver.*/nameserver 1.1.1.1/' /etc/resolv.conf"

Replace <ROOT_PASSWORD> with the root password you set for your Proxmox system during installation. If sshpass is not available in the Rescue System, install it first with apt-get install sshpass.
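
To confirm the file landed where Proxmox expects it, you can read it back over the same forwarded port:

# Optional: read the file back to verify the transfer
sshpass -p "<ROOT_PASSWORD>" ssh -o StrictHostKeyChecking=no -p 2222 root@localhost "cat /etc/network/interfaces"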

7. Gracefully Shutdown QEMU

After you've finished any necessary manual configurations via VNC or SSH, you can gracefully shut down QEMU:

printf "system_powerdown\n" | nc 127.0.0.1 4444

8. Reboot into Proxmox VE

Lastly, you'll need to reboot your Hetzner Rescue System into Proxmox VE:

shutdown -r now

After the reboot, your Proxmox VE system should be up and running. You can access the Proxmox VE web interface at https://<SERVER_IP>:8006.
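
Once the server is back up, a quick way to check that the web interface is answering before opening a browser (Proxmox VE ships with a self-signed certificate, hence -k):

# Optional: check from your workstation; -k skips certificate validation
curl -sk "https://<SERVER_IP>:8006" > /dev/null && echo "Proxmox VE web UI is up"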

@Avedena commented Mar 8, 2024

For some reason the installation does not create a clean ZFS system.
I start QEMU with this command:

qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -boot d \
 -cdrom /root/proxmox-ve_8.1-2.iso \
 -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio \
 -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio \
 -drive file=/dev/nvme2n1,format=raw,media=disk,if=virtio \
 -drive file=/dev/nvme3n1,format=raw,media=disk,if=virtio \
 -vnc 127.0.0.1:1

I also tested installing to the drives with -hda /dev/nvme{0-3}n1. I can boot the Proxmox VE installation in QEMU, but if I restart the server, it doesn't boot into Proxmox VE.

If I boot into the Hetzner rescue system again, there are no ZFS pools:

root@rescue ~ # zfs list
no datasets available

root@rescue ~ # zpool list
no pools available

My server is equipped with four SAMSUNG MZQLB1T9HAJR-00007 NVMe SSDs:

root@rescue ~ # lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0  3.1G  1 loop
nvme1n1 259:0    0  1.7T  0 disk
nvme0n1 259:1    0  1.7T  0 disk
nvme3n1 259:2    0  1.7T  0 disk
nvme2n1 259:3    0  1.7T  0 disk
root@rescue ~ # nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme3n1          /dev/ng3n1            S439NE0N123456       SAMSUNG MZQLB1T9HAJR-00007               1           1.54  GB /   1.92  TB      4 KiB +  0 B   EDA5502Q
/dev/nvme2n1          /dev/ng2n1            S439NE0N123456       SAMSUNG MZQLB1T9HAJR-00007               1           1.46  GB /   1.92  TB      4 KiB +  0 B   EDA5502Q
/dev/nvme1n1          /dev/ng1n1            S439NE0N123456       SAMSUNG MZQLB1T9HAJR-00007               1           1.46  GB /   1.92  TB      4 KiB +  0 B   EDA5502Q
/dev/nvme0n1          /dev/ng0n1            S439NE0N123456       SAMSUNG MZQLB1T9HAJR-00007               1           1.54  GB /   1.92  TB      4 KiB +  0 B   EDA5502Q

@gushmazuko (Author) commented Mar 8, 2024

@Avedena If I'm not mistaken, to see the ZFS pool on a Rescue System, you first need to import it using the command:

zpool import

Are you able to boot into the installed Proxmox OS?

@gushmazuko (Author) commented:

I've also developed an Ansible role aimed at fully automating the setup process, which I plan to publish on GitHub shortly.

@Avedena commented Mar 8, 2024

@gushmazuko No, I am not able to boot into Proxmox outside of QEMU.

root@rescue ~ # zpool import
no pools available to import

edit:
After installing Proxmox with the following QEMU command, I can see the pool inside the rescue system.
But it still cannot boot. It looks like I need to adjust the device IDs and serial numbers to get the pool recognized.

qemu-system-x86_64 -daemonize -enable-kvm -smp 4 -m 4096 -boot d -D /root/qemu.log \
-cdrom /root/proxmox-ve_8.1-2.iso \
-device nvme,drive=nvme0n1,serial=${realDeviceSN} \
-drive file=/dev/nvme0n1,if=none,id=nvme0n1 \
-device nvme,drive=nvme1n1,serial=${realDeviceSN} \
-drive file=/dev/nvme1n1,if=none,id=nvme1n1 \
-device nvme,drive=nvme2n1,serial=${realDeviceSN} \
-drive file=/dev/nvme2n1,if=none,id=nvme2n1 \
-device nvme,drive=nvme3n1,serial=${realDeviceSN} \
-drive file=/dev/nvme3n1,if=none,id=nvme3n1 \
-monitor telnet:127.0.0.1:4444,server,nowait \
-vnc 127.0.0.1:1
root@rescue ~ # zpool import
   pool: rpool
     id: 14994337035133382658
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        rpool        UNAVAIL  insufficient replicas
          raidz1-0   UNAVAIL  insufficient replicas
            nvme1n1  UNAVAIL  invalid label
            nvme3n1  UNAVAIL  invalid label
            nvme2n1  UNAVAIL  invalid label
            nvme0n1  UNAVAIL  invalid label

To be clear: I am still able to boot Proxmox in QEMU, and the pool has no known data errors. It only gets stuck when I boot Proxmox on bare metal.

edit2:
I'm getting closer. I ordered a KVM console and tried to install natively.

In the setup GUI, I get the following error after selecting the filesystem and pressing Next:

Warning: Booting from 4kn drive in legacy BIOS mode is not supported.

Please fix ZFS setup first.

The solution: you need to change the boot mode from Legacy to UEFI to use 4Kn drives.

@Souldiv commented Jun 15, 2024

So, I too had the issue where running the above QEMU commands just did not let Proxmox boot with ZFS. I realized it was because my system doesn't have CSM and supports only UEFI. When you run the above QEMU commands, Proxmox is installed with a legacy BIOS boot manager on the partition; you can verify this by running efibootmgr.

Coming to the solution:

The Hetzner rescue system should already have OVMF; if not, install it with: sudo apt-get install ovmf

Then you can install Proxmox using the following:

qemu-system-x86_64 -daemonize -enable-kvm -m 10240 -k en-us \
-drive file=/dev/$PRIMARY_DISK,format=raw,media=disk,if=virtio,id=$PRIMARY_DISK \
-drive file=/dev/$SECONDARY_DISK,format=raw,media=disk,if=virtio,id=$SECONDARY_DISK \
-drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,readonly=on \
-drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw \
-cdrom /tmp/proxmox-ve.iso -boot d \
-vnc :0,password=on -monitor telnet:127.0.0.1:4444,server,nowait
