PVE-Multipath

Installing multipath tools on PVE Cluster with shared storage

This cheatsheet shows how to install and configure multipath tools on a Proxmox PVE cluster where multiple nodes share a single storage with a multipath configuration, for example SAN storage connected to each node by two independent paths.

Proxmox PVE version

This cheatsheet has been tested on Proxmox 5.x.

Note about sudo

I do not prepend the sudo command to any of the commands listed here, but keep in mind that nearly all of them require root privileges, so use sudo if your account does not have root access.
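For example, a non-root account would run the system update from the Installation section like this:

sudo apt-get update && sudo apt-get dist-upgrade && sudo apt-get autoremove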

How to edit config files

The simplest way to edit a config file is to use the vim.tiny editor. For example, to edit the vzdump.conf file use this command:

vim.tiny /etc/vzdump.conf

Or just use any of your favourite editors (I use mc a lot, too).

Installation

Update the system

Update the system using the PVE GUI or from the command line:

apt-get update && apt-get dist-upgrade && apt-get autoremove

Installing multipath tools

To install multipath-tools use this command:

apt-get install multipath-tools
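To verify the installation, you can check the package status and confirm that the dm_multipath kernel module is loaded (if lsmod shows nothing yet, the module is usually loaded once the daemon starts, or can be loaded manually with modprobe dm_multipath):

dpkg -s multipath-tools | grep Status
lsmod | grep dm_multipath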

Create config file

The multipath daemon detects the configuration automatically even without any config file, but it is highly recommended to create a specific configuration file to fine-tune some of the configuration parameters. Create the config file at /etc/multipath.conf:

defaults {
    udev_dir                /dev
    polling_interval        2
    selector                "round-robin 0"
    path_grouping_policy    multibus
    prio_callout            none
    path_checker            readsector0
    rr_min_io               100
    rr_weight               priorities
    failback                immediate
    user_friendly_names     yes
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(hd|xvd)[a-z]*"
    wwid    "3600508b1001c8873afd5b5e0fbc39149" # specific disk
}

In the blacklist section you need to specify drives that you want the multipath daemon to ignore; for example, it is a good idea to ignore the PVE boot drives.

If you want to get the WWID of the /dev/sda drive, run this command:

/lib/udev/scsi_id -g -u -d /dev/sda

Use the returned value to blacklist the disk.
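To list the WWIDs of all SCSI disks at once, a small convenience loop like this works (assuming your disks show up as /dev/sda, /dev/sdb, and so on):

for dev in /dev/sd?; do printf '%s ' "$dev"; /lib/udev/scsi_id -g -u -d "$dev"; done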

Restarting service

To restart the multipath tools service, use this command:

systemctl restart multipath-tools.service
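You can then check that the daemon is running again:

systemctl status multipath-tools.service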

Showing multipath topology configuration

To see the actual multipath topology configuration, use:

multipath -ll

The output should look like the following example, where you can see two multipath devices: mpathb (279 GB) and mpatha (3.3 TB). Both multipath devices are built from two underlying path devices (physical connections): sde and sdc for mpathb, and sdb and sdd for mpatha.

mpathb (3600c0ff000155c6998cec05001000000) dm-6 HP,P2000 G3 SAS
size=279G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 2:0:1:2 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 2:0:0:2 sdc 8:32 active ready running
mpatha (3600c0ff000155bf66113db5b01000000) dm-5 HP,P2000 G3 SAS
size=3.3T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 2:0:0:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 2:0:1:1 sdd 8:48 active ready running

Show device mapper logical volumes setup

You can also inspect the multipath configuration using the device mapper setup tool. To show the device mapper setup as a tree view, use:

dmsetup ls --tree

The output should look like this:

pve-swap (253:0)
 └─ (8:3)
pve-root (253:1)
 └─ (8:3)
pve-data (253:8)
 ├─pve-data_tdata (253:7)
 │  └─ (8:3)
 └─pve-data_tmeta (253:6)
    └─ (8:3)
mpatha-part1 (253:5)
 └─mpatha (253:2)
    ├─ (8:48)
    └─ (8:16)
mpathb-part1 (253:4)
 └─mpathb (253:3)
    ├─ (8:32)
    └─ (8:64)

You can see here that both multipath devices mpatha and mpathb have two paths to two underlying devices.
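The numbers in parentheses, such as (8:16), are the major:minor numbers of the underlying block devices. If you prefer to see device names instead, lsblk shows a similar tree with sdb, sdd, etc. and the multipath maps stacked on top of them:

lsblk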

Get multipath status or debug info

Show multipath status (default verbosity):

multipath -v2

Show multipath debug info:

multipath -v3

Accessing multipath devices

Multipath devices are mapped to /dev/mapper/, for example:

  /dev/mapper/mpatha
  /dev/mapper/mpathb
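To list all device mapper nodes currently present (multipath maps, their partitions and LVM volumes), you can simply run:

ls -l /dev/mapper/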

Create LVM

Now that the multipath device(s) are ready, it is time to create LVM on them so we can use those devices as LVM storage in Proxmox PVE.

Preparing disks (wipe out old partition data):

If you want to wipe out existing partition data on device /dev/sdX, run this command (use with caution!):

dd if=/dev/zero of=/dev/sdX bs=512 count=1 conv=notrunc

You can wipe out partition data on a multipath device, too:

dd if=/dev/zero of=/dev/mapper/mpathX bs=512 count=1 conv=notrunc

Add partition device mappings

kpartx -a /dev/mapper/mpathX

Create a Linux LVM partition using fdisk:

fdisk /dev/mapper/mpathX

At the fdisk prompt, type the following commands in this order: g creates a new empty GPT partition table, n creates a new partition, the three Enters accept the default partition number, first sector and last sector, t sets the partition type (on a GPT table, press L to list the available types and select "Linux LVM"; the 8e code only applies to MBR tables), and w writes the table and quits:

g
n
Enter
Enter
Enter
t
<Linux LVM type, press L to list>
w
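If the new partition mapping (mpathX-part1) does not show up under /dev/mapper right away, re-running kpartx refreshes the mappings (this extra step may not be needed on every setup):

kpartx -a /dev/mapper/mpathX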

Create LVM physical volume

Now we can create an LVM physical volume that will use the partition (mpathX-part1) created by fdisk in the previous step.

pvcreate /dev/mapper/mpathX-part1

List LVM physical volumes

To list the available physical volumes, use this command:

pvscan

Create LVM volume group

Now we can create an LVM volume group on the physical volume created previously. Choose a name that you like.

vgcreate <NAME> /dev/mapper/mpathX-part1
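For example, to create a volume group named san-vg (the name is just an example) on the first multipath device:

vgcreate san-vg /dev/mapper/mpatha-part1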

List LVM volume groups

To list the available volume groups, use this command:

vgscan

Note: LVM volume groups can now easily be used as storage via the Proxmox PVE GUI.
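Alternatively, the volume group can be added as shared storage from the command line with pvesm; the storage ID san-lvm and volume group san-vg below are just examples, see man pvesm for the exact options supported by your PVE version:

pvesm add lvm san-lvm --vgname san-vg --content images,rootdir --shared 1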

Create LVM filter

LVM scans all devices for LVM configuration on startup, and because we configured multipath it will find LVM metadata both on the (virtual) multipath devices and on the underlying (physical) drives. It is therefore a good idea to create an LVM filter that filters out the physical drives and allows LVM to scan the multipath devices only.

Edit /etc/lvm/lvm.conf and add this filter line to the devices section, below the filter examples that are already present in the default config file (be sure those examples remain commented out). In this example we accept multipath devices only, additionally accept /dev/sda.* (where Proxmox PVE itself is installed), and reject all other devices:

    # PVE (match in following order)
    #  - accept /dev/mapper devices (multipath devices are here)
    #  - accept /dev/sda device (created by PROXMOX installed on EXT4)
    #  - reject all other devices
    filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]  

Important note: the last rule rejects all devices that do not match the previous (accept) rules, so it is very important to include (accept) the PVE boot device if the system is installed on LVM (and that is true by default when installed on EXT4)! Otherwise the system will become unbootable!
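Before updating the initramfs and rebooting, it is worth doing a quick sanity check that LVM still sees the boot device and the multipath physical volumes, and that it no longer complains about duplicate PVs:

pvs
vgs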

Update initramfs

After editing the LVM filter you need to update the initramfs; to do so, run this command:

update-initramfs -u

Reboot the node and verify that LVM no longer reports duplicate physical volumes.
