Overview
Installation notes taken while setting up a 3-node RHHI (Red Hat Hyperconverged Infrastructure) system. There is a lot of manual work here, much of which could eventually be automated.
Setup
Install RHVH with the default configuration on sda only, then wipe the remaining disks:

```shell
for i in b c d e f g; do wipefs -a -f /dev/sd$i; done
```
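The wipe step can also be written with a guard so the OS disk is never touched by accident. This is a dry-run sketch; the `KEEP` and `DISKS` names are illustrative assumptions based on the layout above (sda holds the RHVH install):

```shell
# Dry-run sketch of the wipe loop: skip the disk RHVH was installed on.
# KEEP and DISKS are assumptions matching the layout above; adjust as needed.
KEEP=sda
DISKS="sda sdb sdc sdd sde sdf sdg"
for d in $DISKS; do
  if [ "$d" = "$KEEP" ]; then
    echo "keep /dev/$d"
  else
    echo "wipe /dev/$d"
    # wipefs -a -f "/dev/$d"   # uncomment to actually wipe
  fi
done
```

With the echoes in place you can eyeball the plan before uncommenting the destructive `wipefs` line.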
Subscribe
Register the nodes with subscription-manager and attach a subscription pool:

```shell
subscription-manager register
subscription-manager list --pool-only --available --matches="..."
subscription-manager attach --pool=...
```
Install and Setup dnsmasq
This might not be necessary if you have DNS set up everywhere, but we're going to run dnsmasq on our nfvha-strg-02 node.

```shell
yum install dnsmasq -y
```

Configure and set up dnsmasq per https://github.com/redhat-nfvpe/telemetry-framework/blob/master/docs/02-server_side_installation.md#dnsmasq-configuration
Configuration I used for these nodes:

```shell
cat > /etc/dnsmasq.conf <<EOF
log-facility=/var/log/dnsmasq.log
listen-address=10.19.110.9
bind-interfaces
cache-size=300
server=10.16.36.29
server=10.11.5.19
server=10.5.30.160
server=127.0.0.1
no-hosts
addn-hosts=/etc/hosts.dnsmasq
address=/apps.dev.nfvpe.site/10.19.110.101
no-resolv
EOF
```
```shell
cat > /etc/hosts.dnsmasq <<EOF
10.19.110.80 engine.dev.nfvpe.site
10.19.110.101 master.dev.nfvpe.site
10.19.110.101 console.dev.nfvpe.site
10.19.110.101 openshift-master.dev.nfvpe.site
10.19.110.102 openshift-node-1.dev.nfvpe.site
10.19.110.103 openshift-node-2.dev.nfvpe.site
10.19.111.101 nfvha-strg-01.storage.dev.nfvpe.site
10.19.111.102 nfvha-strg-02.storage.dev.nfvpe.site
10.19.111.103 nfvha-strg-03.storage.dev.nfvpe.site
EOF
```
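Before restarting dnsmasq it can be worth sanity-checking the addn-hosts file. A small sketch; `check_hosts_file` is a hypothetical helper, not a dnsmasq tool:

```shell
# Hypothetical helper: verify every non-comment line of an addn-hosts file
# is an "IP hostname" pair, which is the format dnsmasq expects.
check_hosts_file() {
  awk 'NF == 0 || $1 ~ /^#/ { next }
       $1 !~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ { print "bad IP on line " NR; bad = 1 }
       NF < 2 { print "missing hostname on line " NR; bad = 1 }
       END { exit bad }' "$1"
}

# e.g.: check_hosts_file /etc/hosts.dnsmasq && echo "hosts file OK"
```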
Modify /etc/NetworkManager/NetworkManager.conf so that NetworkManager no longer manages /etc/resolv.conf on the nodes; resolv.conf should point only at 10.19.110.9 (the dnsmasq server).

```shell
grep -v '#' /etc/NetworkManager/NetworkManager.conf
```

```
[main]
dns=none

[logging]
```

Then restart NetworkManager:

```shell
systemctl restart NetworkManager.service
```
Modify your /etc/resolv.conf:

```shell
cat > /etc/resolv.conf <<EOF
search oot.lab.eng.bos.redhat.com storage.dev.nfvpe.site
nameserver 10.19.110.9
EOF
```
Set up the firewall on your dnsmasq server (10.19.110.9) to expose the DNS ports (53/tcp and 53/udp), reload the firewall so the permanent rule takes effect, then restart dnsmasq:

```shell
firewall-cmd --zone=public --permanent --add-service dns
firewall-cmd --reload
systemctl restart dnsmasq.service
```
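A quick sanity check (assuming firewalld is running on the node) that the dns service actually shows up in the zone's active services:

```shell
# Check that the dns service is listed in the public zone's active services.
firewall-cmd --zone=public --list-services | grep -qw dns \
  && echo "dns service exposed" \
  || echo "dns service missing from the public zone"
```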
Setup SSH keys for configuration
On your first host (where the engine will be installed), create an SSH key and ssh-copy-id it to the local hostname of that node (e.g. nfvha-strg-01.oot.lab.eng.bos.redhat.com) along with the other nodes (e.g. nfvha-strg-02.oot.lab.eng.bos.redhat.com and nfvha-strg-03.oot.lab.eng.bos.redhat.com).

On nfvha-strg-01.oot.lab.eng.bos.redhat.com:

```shell
ssh-keygen
for i in 1 2 3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@nfvha-strg-0$i.oot.lab.eng.bos.redhat.com; done
```
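To confirm the keys landed, each node should now accept root logins without a password prompt. A sketch; `node_ok` is a hypothetical helper, and `BatchMode=yes` makes ssh fail instead of prompting:

```shell
# Hypothetical helper: succeeds only if key-based root login works, because
# BatchMode=yes makes ssh fail rather than fall back to a password prompt.
node_ok() {
  ssh -o BatchMode=yes "root@$1" true 2>/dev/null
}

# e.g.:
# for i in 1 2 3; do
#   node_ok nfvha-strg-0$i.oot.lab.eng.bos.redhat.com || echo "node $i: key login failed"
# done
```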
Install Hyperconverged Infrastructure
Now you're going to start building RHHI. Log in to the first node on port 9090 as the root user and start the configuration under Virtualization > Hosted Engine > Hyperconverged > Start.

- Run Gluster Wizard
  - Hosts
    - Set up Host1, Host2, Host3 (e.g. nfvha-strg-01.storage.dev.nfvpe.site) - Next
  - FQDNs
    - Add Host2 and Host3 addresses (e.g. nfvha-strg-02.oot.lab.eng.bos.redhat.com) - Next
  - Volumes
    - Next
  - Bricks
    - Raid type: JBOD (just a bunch of disks)
    - Device Name
      - engine: sdb
      - data: sdd
      - vmstore: sde
    - Sizes (GB)
      - engine: 400
      - data: 1800
      - vmstore: 1800
    - Next
  - Review
    - Reload
    - Deploy
Then follow the instructions for the engine installation per https://doc-stage.usersys.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html-single/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/#deploy-he-cockpit-ver150

- Set the static IP of engine.dev.nfvpe.site to 10.19.110.80
- Set your passwords
- Make sure engine.dev.nfvpe.site validates
- Click through and let the engine installation finish
At this point, check on your first node that /etc/resolv.conf hasn't reverted. I had this happen, and it causes issues with the Finish step while the engine and all the storage are being set up. Without /etc/resolv.conf pointing at 10.19.110.9, the final step will fail.
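A quick re-check before clicking through the final step; the path and IP are the ones used above, and the one-liner is just a sketch of the check:

```shell
# Warn if resolv.conf no longer points at the dnsmasq server (10.19.110.9).
grep -q '^nameserver 10.19.110.9$' /etc/resolv.conf \
  && echo "resolv.conf still points at dnsmasq" \
  || echo "WARNING: resolv.conf reverted; fix it before finishing deployment"
```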
In the Storage section, specify the IP addresses of the storage connections. The Storage Connection will be 10.19.110.80:/engine (which points at the engine), and the backup volfile-servers will be 10.19.110.9 and 10.19.110.11 (the IP addresses of the other two host servers).

Click on Finish Deployment and wait a while for it to finish. :fingerscrossed: