Quick libvirt pcs lab setup

To keep things simple, I'm using my local Fedora 29 install with libvirt. We will also use virtualbmc to act as an IPMI interface so that stonith works with pcs. For a quicker install, I'm just using a CentOS 7 cloud image with a manual configdrive ISO for each VM. We will use a couple of VMs for the cluster and a third for the iSCSI server. I have recently found myself dealing with OSP13, which uses pcs and iSCSI, so this is a refresher for me as well since it's been a few years.

Yes, we could script and/or ansible all of this shit out, but this is a learning exercise. I'm also not worried about the details, just the meat of it all. We could get lost in the details for weeks on any part of this.

Prep the base image

We will download a CentOS 7 cloud image and increase the default image size to 10G to cut down on future headaches. We can use it as a base image to quickly spin up VMs with snapshots. There are many ways to go about this; this is just one. This also assumes that libvirt, virsh, virt-manager and related tools are already installed on the Fedora box.

wget -4 https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 \
  -O /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2
  
qemu-img resize /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2 10G

virsh pool-refresh default

virsh vol-list default
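
If you want a quick sanity check that the resize took, qemu-img can report the new virtual size (this step is optional):

# virtual size should now show 10G
qemu-img info /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2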

Generate an SSH key for VM access

We are generating the key now as it's needed for cloud image access. The public key will go into the configdrive ISO for cloud-init.

ssh-keygen -f ~/.ssh/pcstest -N ''

Generate a config drive for 3 hosts

for VMNAME in storage node01 node02; do

  cat <<EOF > ./user-data
#cloud-config

groups:
  - pcsuser

users:
  - name: pcsuser
    gecos: PCS User
    primary_group: pcsuser
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: true
    ssh_authorized_keys:
      - $(cat ~/.ssh/pcstest.pub)
EOF

  cat <<EOF > ./meta-data
instance-id: ${VMNAME}
local-hostname: ${VMNAME}
EOF

    genisoimage -volid cidata -input-charset utf-8 -joliet -rock \
      -output /var/lib/libvirt/images/configdrive-${VMNAME}.iso  user-data meta-data

done

virsh pool-refresh default
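
Optionally, confirm that each ISO got the cidata volume label cloud-init looks for; blkid on the host should report LABEL="cidata" and TYPE="iso9660" (node01 used here as an example):

blkid /var/lib/libvirt/images/configdrive-node01.iso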

Generate the OS volumes from the base image

for VMNAME in storage node01 node02; do

  qemu-img create -f qcow2 \
    -b /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2 \
    /var/lib/libvirt/images/${VMNAME}.img.snap

done

virsh pool-refresh default
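
Also optional, but qemu-img info on one of the snapshots should list the GenericCloud image as its backing file, which confirms the overlays were created correctly:

qemu-img info /var/lib/libvirt/images/node01.img.snap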

Build the VMs

We are not doing anything special with these. The default libvirt network is usually 192.168.122.0/24 and hands out IP addresses to the VMs from that range. Cloud-init will look for the cidata volume ID we set when creating the ISO images and use it as its configdrive configuration data. In a real environment we might separate the provisioning (kickstart), cluster, storage and IPMI networks depending on the environment.

for VMNAME in storage node01 node02; do

  virt-install -n $VMNAME --vcpus 2 -r 2048 -w network=default \
    --disk vol=default/${VMNAME}.img.snap,format=qcow2 --import \
    --disk vol=default/configdrive-${VMNAME}.iso,device=cdrom \
    --noautoconsole

done

Monitor the build

Running virt-manager as your GNOME user should let you see the builds and make sure things are going well. Startup should be quick with the cloud images.

virt-manager
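
If you'd rather stay on the command line, you can keep an eye on things with virsh instead; the cloud images have a serial console, and ctrl+] detaches from it:

virsh list --all
virsh console node01    # detach with ctrl+]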

Gathering some info and testing connectivity

Here we will just use the libvirt tools to see which addresses were handed out and verify that cloud-init did its job of setting up the pcsuser account and keys.

for NODE in $(virsh list | awk '/running/{print $2}'); do echo "#### ${NODE}: ####"; echo ; virsh domifaddr $NODE; done
#### storage: ####

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:7b:6f:48    ipv4         192.168.122.223/24

#### node01: ####

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet1      52:54:00:9e:4d:51    ipv4         192.168.122.48/24

#### node02: ####

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet2      52:54:00:88:c8:85    ipv4         192.168.122.167/24


for LOCT in 223 48 167; do ssh -o StrictHostKeyChecking=no -i ~/.ssh/pcstest pcsuser@192.168.122.${LOCT} "hostname"; done
storage
node01
node02

Set up virtualbmc

In order to test stonith, we can use virtualbmc with libvirt to set up an IPMI interface much like the one you would use when connecting to an HP iLO or Dell DRAC. I'm just going to set up a quick Python virtualenv, install virtualbmc and create two IPMI interfaces for node01 and node02. The default libvirt network uses virbr0 with a gateway address of 192.168.122.1. As long as the service is listening on that IP, we should be able to reach it from each node for stonith.

Each VM has to have its own port number.

dnf install libvirt-devel python-virtualenv
virtualenv ~/pcslab
. ~/pcslab/bin/activate
(pcslab) [root ~]# pip install libvirt-python virtualbmc

(pcslab) [root ~]# vbmc add node01 --port 1111 --username admin --password beansandrice
(pcslab) [root ~]# vbmc add node02 --port 1112 --username admin --password beansandrice
(pcslab) [root ~]# vbmc start node01
(pcslab) [root ~]# vbmc start node02
(pcslab) [root ~]# vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| node01      | running | ::      | 1111 |
| node02      | running | ::      | 1112 |
+-------------+---------+---------+------+

(pcslab) [root ~]# ss -nutpl | grep 111[12]
udp    UNCONN   0        0                       *:1111                 *:*      users:(("vbmc",pid=9847,fd=29))                                                
udp    UNCONN   0        0                       *:1112                 *:*      users:(("vbmc",pid=9922,fd=29))  

From the above output, you can see that both services are up and listening on all interfaces. Let's test restarting the nodes from each other.
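
Before rebooting anything, it doesn't hurt to confirm the BMCs answer from the hypervisor itself (ipmitool may need to be installed on the Fedora host first):

dnf install ipmitool
ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1111 power status
ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1112 power status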


# Log into node01 and install ipmitool
(pcslab) [root ~]# ssh -i ~/.ssh/pcstest pcsuser@192.168.122.48
[pcsuser@node01 ~]$ sudo su -
[root@node01 ~]# yum -y install ipmitool

# Test rebooting node02 from node01 (virt-manager might be a good tool to watch the fun)
[root@node01 ~]# ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1112 power reset

# Test rebooting node01 from node01 (it will lock your session and kick you after the boot.)
[root@node01 ~]# ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1111 power reset

# Log into node02 and install ipmitool
(pcslab) [root ~]# ssh -i ~/.ssh/pcstest pcsuser@192.168.122.167
[pcsuser@node02 ~]$ sudo su -
[root@node02 ~]# yum -y install ipmitool

# Test rebooting node01 from node02 (virt-manager might be a good tool to watch the fun)
[root@node02 ~]# ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1111 power reset

# Test rebooting node02 from node02 (it will lock your session and kick you after the boot.)
[root@node02 ~]# ipmitool -I lanplus -U admin -P beansandrice -H 192.168.122.1 -p 1112 power reset

Set up the iSCSI server

(pcslab) [root ~]# ssh -i ~/.ssh/pcstest pcsuser@192.168.122.223
[pcsuser@storage ~]$ sudo su -
[root@storage ~]# yum install -y targetcli
[root@storage ~]# systemctl enable target
[root@storage ~]# systemctl start target


[root@storage ~]# targetcli

/> backstores/fileio/ create mysql /opt/mysql.img 1G
Created fileio mysql with size 1073741824
/> backstores/fileio/ create nfs /opt/nfs.img 1G
Created fileio nfs with size 1073741824

/> iscsi/ create iqn.2019-04.com.riceandbeans:t1
Created target iqn.2019-04.com.riceandbeans:t1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

/> cd iscsi/iqn.2019-04.com.riceandbeans:t1/tpg1/

/iscsi/iqn.20...beans:t1/tpg1> luns/ create /backstores/fileio/mysql
Created LUN 0.
/iscsi/iqn.20...beans:t1/tpg1> luns/ create /backstores/fileio/nfs
Created LUN 1.

/iscsi/iqn.20...beans:t1/tpg1> acls/ create iqn.2019-04.com.riceandbeans:client
Created Node ACL for iqn.2019-04.com.riceandbeans:client
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20...beans:t1/tpg1> cd acls/iqn.2019-04.com.riceandbeans:client/
/iscsi/iqn.20...dbeans:client> set auth userid=admin
Parameter userid is now 'admin'.
/iscsi/iqn.20...dbeans:client> set auth password=riceandbeans
Parameter password is now 'riceandbeans'.

/iscsi/iqn.20...dbeans:client> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
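
At this point a quick targetcli ls on the storage node should show the target, both LUNs and the client ACL we just created. This is purely a visual check; nothing later depends on it.

[root@storage ~]# targetcli ls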

Set up the iSCSI clients

Log into each node and do the following.

vi /etc/hosts
192.168.122.223 storage
192.168.122.48 node01
192.168.122.167 node02

yum -y install iscsi-initiator-utils

echo 'InitiatorName=iqn.2019-04.com.riceandbeans:client' > /etc/iscsi/initiatorname.iscsi

cat <<EOF >> /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = admin
node.session.auth.password = riceandbeans
EOF

systemctl start iscsi

iscsiadm --mode discovery --type sendtargets --portal storage
    192.168.122.223:3260,1 iqn.2019-04.com.riceandbeans:t1

iscsiadm --mode node --targetname iqn.2019-04.com.riceandbeans:t1 --portal 192.168.122.223:3260 --login

lsblk --scsi | grep iscsi
    sdb  2:0:0:0    disk LIO-ORG  mysql            4.0  iscsi
    sdc  2:0:0:1    disk LIO-ORG  nfs              4.0  iscsi

yum -y install lvm2

Run the following on only one of the nodes.

pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
  
vgcreate mysqlVG /dev/sdb
  Volume group "mysqlVG" successfully created
vgcreate nfsVG /dev/sdc
  Volume group "nfsVG" successfully created

lvcreate -l 100%FREE -n mysqlLV mysqlVG
  Logical volume "mysqlLV" created.
lvcreate -l 100%FREE -n nfsLV nfsVG
  Logical volume "nfsLV" created.

mkfs.ext4 /dev/mapper/mysqlVG-mysqlLV 
mkfs.ext4 /dev/mapper/nfsVG-nfsLV

lvchange -an /dev/mapper/nfsVG-nfsLV 
lvchange -an /dev/mapper/mysqlVG-mysqlLV 

Log into the other node and make sure it can see the volumes.

pvscan
  PV /dev/sdc   VG nfsVG           lvm2 [1020.00 MiB / 0    free]
  PV /dev/sdb   VG mysqlVG         lvm2 [1020.00 MiB / 0    free]
  Total: 2 [1.99 GiB] / in use: 2 [1.99 GiB] / in no VG: 0 [0   ]

vgscan
  Reading volume groups from cache.
  Found volume group "nfsVG" using metadata type lvm2
  Found volume group "mysqlVG" using metadata type lvm2

lvscan
  inactive          '/dev/nfsVG/nfsLV' [1020.00 MiB] inherit
  inactive          '/dev/mysqlVG/mysqlLV' [1020.00 MiB] inherit

Setting up a PCS cluster

Run the following on both nodes.

yum -y install pcs pacemaker fence-agents-all
passwd hacluster
systemctl enable pcsd.service; systemctl start pcsd.service
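
If you'd rather not type the hacluster password interactively on both nodes, the RHEL/CentOS passwd command supports --stdin; 'hapass123' below is just a placeholder, use whatever you like:

echo 'hapass123' | passwd --stdin hacluster    # hapass123 is a placeholder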

Auth both nodes to pcs from a single node using the hacluster password you just set.

pcs cluster auth node01 node02

Set up the main cluster from a single node.

pcs cluster setup --start --name cluster01 node01 node02
pcs cluster enable --all
pcs cluster status

Set up fencing to use the virtualbmc devices created earlier.

pcs stonith create ipmilan_node01_fencing fence_ipmilan pcmk_host_list=node01 delay=5 ipaddr=192.168.122.1 ipport=1111 login=admin passwd=beansandrice lanplus=1 op monitor interval=60s

pcs stonith create ipmilan_node02_fencing fence_ipmilan pcmk_host_list=node02 delay=5 ipaddr=192.168.122.1 ipport=1112 login=admin passwd=beansandrice lanplus=1 op monitor interval=60s

pcs property set stonith-enabled=true
pcs stonith show

# test both of them out
pcs stonith fence node02
pcs stonith fence node01