TripleO fencing config for fence_xvm -- generate-fencing-config-xvm.sh
#!/bin/bash
# Generate a FencingConfig JSON value for TripleO, one fence_xvm device
# per matching libvirt domain. Run this on the virt host.
MACHINE_REGEX=${MACHINE_REGEX:-baremetal}
FENCE_XVM_KEY=${FENCE_XVM_KEY:-$(cat /etc/cluster/fence_xvm.key)}
MULTICAST_ADDRESS=${MULTICAST_ADDRESS:-$(grep address /etc/fence_virt.conf | head -n1 | awk -F'"' '{ print $2 }')}

if [ -z "$FENCE_XVM_KEY" ]; then
    echo 'ERROR: fence_xvm key not set' 1>&2
    echo '$FENCE_XVM_KEY is empty and /etc/cluster/fence_xvm.key does not exist / cannot be read / is empty' 1>&2
    exit 1
fi

if [ -z "$MULTICAST_ADDRESS" ]; then
    echo 'ERROR: multicast address not set' 1>&2
    echo '$MULTICAST_ADDRESS is empty and trying to read it from /etc/fence_virt.conf did not work' 1>&2
    exit 1
fi

MACHINES=$(virsh list --all | grep "$MACHINE_REGEX" | awk '{print $2}')
MACHINE_COUNT=$(wc -l <<< "$MACHINES")
MACHINE_NUM=0

echo "{ \"devices\": ["
for MACHINE in $MACHINES; do
    MACHINE_NUM=$((MACHINE_NUM + 1))
    # first MAC address of the domain -- TripleO matches it against the node
    MACHINE_MAC=$(virsh dumpxml "$MACHINE" | grep 'mac address' | head -n1 | awk -F"'" '{print $2}')
    echo "  {"
    echo "    \"agent\": \"fence_xvm\","
    echo "    \"host_mac\": \"$MACHINE_MAC\","
    echo "    \"params\": {"
    echo "      \"multicast_address\": \"$MULTICAST_ADDRESS\","
    echo "      \"port\": \"$MACHINE\","
    echo "      \"manage_fw\": true,"
    echo "      \"manage_key_file\": true,"
    echo "      \"key_file\": \"/etc/fence_xvm.key\","
    echo "      \"key_file_password\": \"$FENCE_XVM_KEY\""
    echo "    }"
    # comma after every device except the last one
    echo "  }$([ "$MACHINE_COUNT" = "$MACHINE_NUM" ] || echo -n ',')"
done
echo "]}"

TripleO virtualized deployment -- fence_xvm config

  1. Prepare for fence_xvm multicast traffic

By default, the overcloud has no direct connection to the host machine, and multicast traffic will not pass through the undercloud, which prevents fence_xvm from working. This needs to be worked around in one of two ways:

Option A: connecting the host machine directly to br-ctlplane

  • Instead of talking to the undercloud and overcloud through libvirt's default network, we'll talk to br-ctlplane directly via brbm on the host machine. This will drop your existing connections to the undercloud and overcloud, and you'll need to re-establish them. Set up the routing on the host machine:
ip addr add 192.0.2.100/24 dev brbm
ip link set brbm up
# ^ this will automatically set up a route like
# 192.0.2.0/24 dev brbm  proto kernel  scope link  src 192.0.2.100

# now you need to delete the original route through default libvirt network
ip route del 192.0.2.0/24 via 192.168.122.244 dev virbr0
# ATTENTION: you'll have a different IP here ^
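  • To verify the new path, a quick sanity check from the host machine (192.0.2.1 is assumed here to be the undercloud's br-ctlplane address, the usual default in virt setups):
ping -c 3 192.0.2.1
ip route get 192.0.2.1
# ^ should show the route going via brbm now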

Option B: multicast forwarding on undercloud

  • On the undercloud, download and build smcroute for forwarding multicast traffic between the overcloud and the host:
wget ftp://troglobit.com/smcroute/smcroute-2.0.0.tar.xz
xz -d smcroute-2.0.0.tar.xz
tar -xf smcroute-2.0.0.tar
cd smcroute-2.0.0
./configure
make
  • Configure and run smcroute in the foreground (e.g. in a separate tmux/screen window):
echo '
mgroup from eth0 group 225.0.0.12
mroute from eth0 group 225.0.0.12 to br-ctlplane

mgroup from br-ctlplane group 225.0.0.12
mroute from br-ctlplane group 225.0.0.12 to eth0
' > smcroute.conf
./smcroute -d -n -f smcroute.conf
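  • Optionally, check that the kernel picked up the multicast routes (a rough check; entries may only appear once multicast traffic is actually flowing, and the output format varies by iproute2 version):
ip mroute show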
  2. Configure and run fence_virtd on the host machine

  • Install and configure fence_virtd:
yum -y install fence-virtd-libvirt fence-virtd-multicast
fence_virtd -c
# use the defaults
# multicast address 225.0.0.12, port 1229
# ATTENTION when selecting the network interface:
# * if you connected the host machine to br-ctlplane (option A), use brbm
# * if you went with multicast forwarding on the undercloud (option B), use virbr0
  • Configure fence_xvm secret:
mkdir /etc/cluster
echo -n "abcdef" > /etc/cluster/fence_xvm.key
  • Run fence_virtd in debug mode in the foreground to see what it does (e.g. in a separate tmux/screen window):
fence_virtd -F -d99
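  • Once fence_virtd is running, you can test it from the host machine itself with the fence_xvm client (assuming a fence-virt package providing fence_xvm is installed; on success it prints the libvirt domain names):
fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -o list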
  3. Add fencing parameters to your overcloud stack

  • Add fencing parameters to your custom heat environment file ($OVERCLOUD_CUSTOM_HEAT_ENV). Use the generate-fencing-config-xvm.sh script above to create the JSON value for the FencingConfig parameter:
parameters:
  EnableFencing: true
  FencingConfig: ##### output of generate-fencing-config-xvm.sh goes here #####
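For illustration only, a filled-in environment file could look roughly like this -- the MAC, port and key values below are made-up placeholders, and since JSON is valid YAML you can also paste the script output in verbatim:

parameters:
  EnableFencing: true
  FencingConfig:
    devices:
    - agent: fence_xvm
      host_mac: "52:54:00:aa:bb:cc"   # placeholder MAC
      params:
        multicast_address: 225.0.0.12
        port: baremetal_0             # libvirt domain name, placeholder
        manage_fw: true
        manage_key_file: true
        key_file: /etc/fence_xvm.key
        key_file_password: abcdef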

Now you can deploy the overcloud. It should come up with fence_xvm stonith devices configured for all controllers, and pcs status should report the devices as started.
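A minimal sketch of the deploy step, assuming the standard tripleoclient CLI and that $OVERCLOUD_CUSTOM_HEAT_ENV points at the environment file above:

openstack overcloud deploy --templates -e "$OVERCLOUD_CUSTOM_HEAT_ENV"
# afterwards, on a controller node:
pcs status   # the fence_xvm stonith resources should show as Started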
