RHV AIO Install for Lab

This, loosely, documents installing RHV as an all-in-one server. This is not supported and has some flakiness, particularly for updates. Additionally, because it's a lab, no "real" storage was used.

The Server

The physical server used for this has 8 cores, 32GB RAM, and a 512GB NVMe drive connected to the network using a single 1 GbE link. You'll need at least 200GiB of storage to comfortably host more than a couple of VMs.

Install and configure

Before beginning, make sure you have forward and reverse DNS working for the hostnames and IPs you're using for both the hypervisor host and RHV-M. E.g.:

  • 10.0.101.20 = rhvm.lab.com
  • 10.0.101.21 = rhv01.lab.com
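
A quick sanity check before going further, using the example names above (this assumes bind-utils is installed so the host command is available):

  # verify forward and reverse resolution for both hosts
  host rhvm.lab.com
  host rhv01.lab.com
  host 10.0.101.20
  host 10.0.101.21
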
  1. Install RHEL

    I'm using RHEL, not RHV-H, because it's easier to manage and add packages (such as an NFS server). I'm going to assume that you've verified the CPU virtualization extensions, etc. have been enabled via BIOS/UEFI and have configured whatever storage you're using for the OS install. If you're using a single, shared drive like I am, I highly recommend allocating about 40GiB for the RHEL OS and reserving the remainder for VM storage domains.

    After installing RHEL 7.7, register and attach it to the appropriate pool.
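
    For reference, registering looks something like this. The pool ID is elided on purpose; use whatever subscription-manager list shows for your account:

    # register with RHSM and attach to a subscription pool
    subscription-manager register
    subscription-manager list --available
    subscription-manager attach --pool=<pool_id>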

  2. Add the needed repos and update, install cockpit

    Following the docs here, enable the repos and update the host.

    # enable the needed repos
    subscription-manager repos \
     --disable='*' \
     --enable=rhel-7-server-rpms \
     --enable=rhel-7-server-rhv-4-mgmt-agent-rpms \
     --enable=rhel-7-server-ansible-2-rpms
    
    # update everything
    yum -y update
    
    # install cockpit with the various add-ons
    yum -y install cockpit-ovirt-dashboard
    
    # enable cockpit and open the firewall
    systemctl enable cockpit.socket
    firewall-cmd --permanent --add-service=cockpit
    
    # since a kernel update probably got installed, reboot the host.  If not, start cockpit and skip this reboot.
    reboot
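
    If a kernel update didn't come down and you skip the reboot, apply the firewall rule and start Cockpit by hand:

    # apply the permanent firewall rule and start cockpit without rebooting
    firewall-cmd --reload
    systemctl start cockpit.socket
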
  3. Configure host storage and management network

    If you have a fancy storage setup for VM storage (RAID 0/1/5/6/10, ZFS, whatever), now is the time to do it. The same goes for any network config (bonds, etc.) needed for management (VM networks come later) that wasn't done pre-install.

    My host, with its single 512GB NVMe drive, was configured to give 40GiB to the RHEL operating system. Using Cockpit, I configured the remaining 430-ish GiB for LVM. In the VG I created a thin pool, which has two (thin) volumes:

    • rhv_she - 100GiB
    • rhv_data - 350GiB

    These volumes are formatted using XFS and mounted to /mnt/rhv_she and /mnt/rhv_data respectively. Last, but not least, set permissions: chown 36:36 /mnt/*.
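
    If you'd rather do the storage from the CLI, it's roughly the below. This is a sketch, not what Cockpit runs verbatim, and /dev/nvme0n1p4 is a placeholder for whatever partition holds your leftover space:

    # create the PV/VG on the spare partition (device name is an example)
    pvcreate /dev/nvme0n1p4
    vgcreate rhv_vg /dev/nvme0n1p4
    
    # one thin pool, two thin volumes matching the sizes above
    lvcreate --type thin-pool -l 100%FREE -n rhv_pool rhv_vg
    lvcreate -V 100G --thinpool rhv_vg/rhv_pool -n rhv_she
    lvcreate -V 350G --thinpool rhv_vg/rhv_pool -n rhv_data
    
    # format, mount, and set the 36:36 (vdsm:kvm) ownership RHV expects
    # (add matching /etc/fstab entries so the mounts survive a reboot)
    mkfs.xfs /dev/rhv_vg/rhv_she
    mkfs.xfs /dev/rhv_vg/rhv_data
    mkdir -p /mnt/rhv_she /mnt/rhv_data
    mount /dev/rhv_vg/rhv_she /mnt/rhv_she
    mount /dev/rhv_vg/rhv_data /mnt/rhv_data
    chown 36:36 /mnt/rhv_she /mnt/rhv_data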

    I'm using NFS to create the illusion of shared storage, just in case I have a second+ host later.

    # create the exports file
    cat << EOF > /etc/exports
    /mnt/rhv_she 10.0.101.0/24(rw,async,no_root_squash)
    /mnt/rhv_data 10.0.101.0/24(rw,async,no_root_squash)
    EOF
    
    # enable the server
    systemctl enable --now nfs-server
    
    # allow access (reload so the permanent rule takes effect now)
    firewall-cmd --permanent --add-service=nfs
    firewall-cmd --reload
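
    Before mounting anything, confirm the exports are live:

    # list the active exports from the server's point of view
    exportfs -v
    showmount -e localhost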

    Test NFS:

    # mount the export locally and make sure we can read and write
    mkdir /mnt/test && mount hostname_or_ip:/mnt/rhv_she /mnt/test
    date > /mnt/test/can_touch_this
    rm /mnt/test/*
    umount /mnt/test
    rmdir /mnt/test
  4. Using Cockpit, deploy RHV-M

    Follow the docs here.

    I assign 2 vCPUs and 4GiB RAM to the VM. It may complain. It'll be fine.

    Once ready, click the Next button; it'll prepare and stage some things, including downloading the Self-Hosted Engine (SHE) VM template. Note that this is a few GiB in size, so it may take a while if your internet is slow.

    At some point, it will ask for the storage you want to use for SHE. Point it to the NFS export for rhv_she, e.g. 10.0.101.21:/mnt/rhv_she. The disk size should be pre-populated at around 80GiB; I leave it at that value since the underlying LVM volume is thin provisioned anyway.
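
    Cockpit is just a front end for the same deployment tooling, so if you'd rather drive it from a terminal, the interactive CLI equivalent is below. This is a sketch; the wizard asks the same questions either way:

    # CLI alternative to the Cockpit wizard (interactive prompts)
    yum -y install ovirt-hosted-engine-setup
    hosted-engine --deploy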

  5. (Maybe) Configure and update RHV-M

    Log in to RHV-M (https://hostname_or_ip/ovirt-engine/webadmin/) using the username and password (admin and whatever) configured during the install. Check the version and see if it's appropriate for what you are using (e.g. OCP IPI install testing). If it is, then this step is unnecessary since everything is temporary (how nihilistic).
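
    A quick way to check the version, assuming you can SSH to the engine VM:

    # on the RHV-M VM: show the installed engine package version
    rpm -q rhvm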

    If you decide to update, then SSH to the RHV-M virtual machine and follow the docs.

    # From the hypervisor node, set maintenance mode
    hosted-engine --set-maintenance --mode=global
    
    # ssh to the RHV-M / SHE virtual machine
    ssh hostname_or_ip_of_hosted_engine
    
    # register and attach
    subscription-manager register
    subscription-manager attach --pool=blahblahblah
    
    # add the repos
    subscription-manager repos \
     --disable='*' \
     --enable=rhel-7-server-rpms \
     --enable=rhel-7-server-supplementary-rpms \
     --enable=rhel-7-server-rhv-4.3-manager-rpms \
     --enable=rhel-7-server-rhv-4-manager-tools-rpms \
     --enable=rhel-7-server-ansible-2-rpms \
     --enable=jb-eap-7.2-for-rhel-7-server-rpms
    
    # check for updates
    engine-upgrade-check
    
    # assuming it returns positive (otherwise, stop here)
    yum -y update ovirt\*setup\* rh\*vm-setup-plugins
    
    # run engine-setup to update the system; more or less, accept the defaults (no
    # need to back up the databases) and let it do its thing
    engine-setup
    
    # once done, update the remaining OS packages
    yum -y update
    
    # if you're planning on updating the hypervisor, shut down RHV-M
    shutdown -h now
    
    # if you're not updating the hypervisor, reboot if a kernel update was applied
    #reboot

    And, finally, update the hypervisor.

    # make sure the RHV-M VM is down
    hosted-engine --vm-status
    
    # update packages in the normal way
    yum -y update
    
    # reboot
    reboot
    
    # when the host comes back up, reconnect via ssh or console
    # the below command will take a few minutes to actually work.  At first it will spit out
    # errors about how it can't connect to storage and to check a few services.  You can
    # view the logs for them, etc., but...for me...it usually takes about 5 minutes
    # before it responds correctly (with a VM down message)
    hosted-engine --vm-status
    
    # once it's responding, restart RHV-M
    hosted-engine --vm-start

    Give the RHV-M VM a minute or two to start up, then browse to the admin portal: https://hostname_or_ip/ovirt-engine/webadmin/.

    Since there is only one node in the cluster and no chance for RHV-M HA, there's no harm in leaving it perpetually in maintenance mode. If you feel the need, remove the SHE cluster from maintenance mode using the command hosted-engine --set-maintenance --mode=none from the hypervisor host.

  6. Configure the RHV environment

    At this point you should be logged into the RHV-M admin GUI and be greeted by the (mostly empty) dashboard. Your one host should be added to the default data center and you should have a storage domain (named whatever you specified during the install, hosted_storage by default).

    Let's finish configuring the RHV deployment. At a minimum, this will mean...

    • If needed, configure additional physical networks.

      If you need to configure additional physical adapters (standalone or bonds) for VM, storage, live migration, etc., now is the time to do so. Browse to Compute -> Hosts and click on the name of the host, then select the "Network Interfaces" tab and, finally, the "Setup Host Networks" button in the upper right.

    • If needed, configure additional logical networks.

      A default ovirtmgmt network will have been created that is capable of placing VMs onto the same network as the management interface. If you need additional configuration (e.g. VLANs), browse to Network -> Networks and add them. Once the network(s) have been defined, browse to Compute -> Hosts, select the host (click the name to view details), and browse to the "Network Interfaces" tab. Click the "Setup Host Networks" button in the upper right, then adjust the network config by dragging and dropping the logical network onto the physical configuration. Once done, click OK to apply.

      Note that if you adjust the ovirtmgmt network, there may be some flakiness, so avoid adjusting it in conjunction with other changes.

    • Add the second storage domain.

      Browse to Storage -> Domains and click the "New Domain" button in the upper right. Fill in the details for an NFS domain (assuming you followed my instructions above) at /mnt/rhv_data. Give it a creative and descriptive name like "rhv_data" so you know its function!
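
      If you'd rather script this, the engine's REST API can create the domain too. A hypothetical sketch, using the example credentials and hostnames from above; note this creates the domain, and attaching it to the data center is a second API call (or a click in the GUI):

      # create the NFS data domain via the REST API (example values throughout)
      curl -k -u 'admin@internal:password' \
        -H 'Content-Type: application/xml' \
        -d '<storage_domain>
              <name>rhv_data</name>
              <type>data</type>
              <storage>
                <type>nfs</type>
                <address>10.0.101.21</address>
                <path>/mnt/rhv_data</path>
              </storage>
              <host><name>rhv01.lab.com</name></host>
            </storage_domain>' \
        https://rhvm.lab.com/ovirt-engine/api/storagedomains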

    • Enable overcommit.

      By default RHV won't overcommit memory. To fix this, browse to Compute -> Cluster, highlight the cluster (Default, by default), and click the "Edit" button. Browse to the "Optimization" tab, then set "Memory Optimization" to your desired value. I also recommend enabling "Count threads as cores" and both "Enable memory balloon optimization" and "Enable KSM" (configured for "best KSM effectiveness") on this same tab.

    • Optionally, remove Spectre/Meltdown protection.

      You may want to remove the IBRS Spectre/Meltdown mitigations if you are willing to trade less security for more CPU performance. If so, browse to Compute -> Cluster, highlight the cluster (by default, Default), and click the "Edit" button in the upper right. On the general tab, for CPU type, choose the latest generation supported by your CPU which doesn't have IBRS SSBD (for Intel) or IBPB SSBD (for AMD) in it.
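
      To see which mitigations the hypervisor kernel itself is applying (informational only; RHEL 7 exposes this via sysfs):

      # show the kernel's view of CPU vulnerabilities and active mitigations
      grep . /sys/devices/system/cpu/vulnerabilities/*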

    • Verify there are no conflicts with MAC address ranges.

      If there is more than one standalone deployment on your network, verify that they aren't using the same MAC address ranges for virtual machines. Browse to Administration -> Configure, then choose the "MAC Address Pools" tab. Click on the default pool and press the "Edit" button at the top of the modal. Check the range against any other instances and adjust if needed.

Other

  • Uploading ISOs / templates can be done via the GUI, but you'll need to download the CA and trust it before it'll succeed. To download the CA bundle, browse to https://hostname_or_ip/ovirt-engine/ and select "CA Certificate", on the left side under "Downloads". Once downloaded, add it to your keychain and trust it as needed.
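
    The same CA can be fetched from the CLI if that's easier, e.g. to feed into a Linux trust store. The pki-resource URL below is the engine's standard CA download endpoint; swap in your RHV-M hostname:

    # download the engine CA certificate
    curl -k -o rhvm-ca.pem \
      'https://rhvm.lab.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'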

    To upload an ISO, browse to Storage -> Disks, then choose Upload -> Start in the upper right corner. Click "Test Connection" in the lower part of the ensuing modal to verify that it will work. Assuming the test passed, choose the ISO and the storage domain you want it to land in, then click OK.

  • Console access is, arguably, easier using noVNC vs SPICE with VirtViewer...and is definitely easier if the host is not directly accessible by the client. For each VM, after it's powered on, highlight the VM in the Compute -> Virtual Machines view, then select the dropdown for "Console" in the upper right and choose "Console Options". Select the radio button for "VNC" at the top, then "noVNC" below. Click OK. When opening the console, it will now open in a new window/tab using the HTML5 noVNC client.
