Single Node OpenShift (SNO) as of 4.9 is a Dev Preview and adding an additional worker is out of scope for now.
The notes listed here are inspirational and totally unsupported. The PoC was heavily inspired by https://medium.com/codex/openshift-single-node-for-distributed-clouds-582f84022bd0
If interested, please follow, vote and/or contribute to OCPPLAN-6839 Single replica control plane topology expansion.
The goal is to save resources on the control plane side: the control node runs all the infra components as well, leaving the worker completely dedicated to SAP HANA on CNV validation.
- 2 bare metal hosts
- jumpbox
- external DNS server
- static IP configuration of hosts
- http server (in this example the same host as the jumpbox)
  - serves ignition files and ISOs
- no loadbalancer
- no PXE nor DHCP server
- no AI (Assisted Installer) utilized (not recommended)
See the required DNS records: https://docs.openshift.com/container-platform/4.9/installing/installing_sno/install-sno-preparing-to-install-sno.html#requirements-for-installing-on-a-single-node_install-sno-preparing
In our case, all of them point to the control host (SNO). The following is the snippet from dnsmasq configuration:
host-record=ctrl.snoplusone.ocp.vslen,10.123.223.23
host-record=worker.snoplusone.ocp.vslen,10.123.223.36
# just for the convenience
cname=ctrl,ctrl.snoplusone.ocp.vslen
cname=worker,worker.snoplusone.ocp.vslen
host-record=snoplusone.ocp.vslen,10.123.223.23
host-record=api.snoplusone.ocp.vslen,10.123.223.23
host-record=api-int.snoplusone.ocp.vslen,10.123.223.23
address=/apps.snoplusone.ocp.vslen/10.123.223.23
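To sanity-check the records before installing, a small sketch like the following prints the dig queries to run against the external DNS server; the names match the dnsmasq snippet above (the `test.apps` record exercises the wildcard `address=` entry):

```shell
# Print the dig queries that verify the DNS records required by SNO.
# Run the printed commands by hand and check each returns the expected IP.
cluster=snoplusone
domain=ocp.vslen
dns=10.1.2.232
for host in api api-int test.apps; do
  echo "dig +short ${host}.${cluster}.${domain} @${dns}"
done
```

All three queries should resolve to the control host (10.123.223.23 in our case).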
Note the IP address of the DNS server (in our case 10.1.2.232).
Take ./eno1.nmconnection as an example. Prepare a similar interface file for both the control and the worker node.
For an unknown reason, the network configuration did not work for the worker node, so only one connection file was needed.
Note your interface name may differ. You may want to boot the live ISO first just to check the name of the interface you want to configure.
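For reference, a minimal keyfile sketch for the control node in the NetworkManager keyfile format; the interface name, addresses and gateway below are assumptions taken from the values used elsewhere in this guide, so adjust them to your network:

```ini
# Sketch of eno1.nmconnection for the control node (values are illustrative)
[connection]
id=eno1
type=ethernet
interface-name=eno1
autoconnect=true

[ipv4]
method=manual
# address1=IP/prefix,gateway
address1=10.123.223.23/22,10.97.228.1
dns=10.1.2.232;

[ipv6]
method=disabled
```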
- Download RHCOS 4.9 live ISO from https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/
- Download openshift-install, oc clients and coreos-installer as described at https://medium.com/codex/openshift-single-node-for-distributed-clouds-582f84022bd0#8ab9
  - in addition, make sure that "jq >= 1.6" and "moreutils" are installed and available on your PATH on the jumpbox
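Since the scripts rely on jq >= 1.6, a tiny preflight sketch (our own helper, not part of any OpenShift tooling) can verify the installed version on the jumpbox:

```shell
# version_ge HAVE WANT -> exit 0 if HAVE >= WANT, using GNU sort -V
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# jq prints e.g. "jq-1.6"; strip the prefix (falls back to 0 if jq is missing)
have=$(jq --version 2>/dev/null | sed 's/^jq-//')
if version_ge "${have:-0}" 1.6; then
  echo "jq ${have} ok"
else
  echo "jq >= 1.6 is missing"
fi
```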
- Download ./install-config.yaml and update the pullSecret and sshKey. Set the desired domain and cluster names. See "Required configuration parameters" in the OpenShift docs for more. Get your own pullSecret from the Red Hat Hybrid Cloud Console.
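Under the assumptions of this setup (cluster snoplusone, domain ocp.vslen, install disk /dev/sda, machine CIDR derived from the host IPs), a minimal SNO install-config.yaml might look roughly like this; all values shown are illustrative:

```yaml
apiVersion: v1
baseDomain: ocp.vslen
metadata:
  name: snoplusone
compute:
- name: worker
  replicas: 0            # SNO: no workers at install time
controlPlane:
  name: master
  replicas: 1            # single control plane node
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 10.123.220.0/22
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda
pullSecret: '<paste your pull secret JSON here>'
sshKey: '<paste your ssh public key here>'
```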
- Download the ./deploy.sh script and edit the variables at the beginning (ISO location, connection file names, etc.).
- Run it.
- Log on to the BMC of the control host, mount the 4.9-ctrl.iso (from your http location) and boot the CD/DVD virtual media.
- Wait for the installation to complete. See "Booting the SNO from ISO" for more information.
- Make sure the ingress controller stays on the control node before a worker is added:
oc patch ingresscontroller/default -n openshift-ingress-operator --type merge -p '{"spec":{ "nodePlacement": { "nodeSelector": { "matchLabels": { "node-role.kubernetes.io/master": "" }}}}}'
Otherwise, you will need to set up a load balancer.
Once the SNO is up and running, you can start provisioning the worker.
The worker.ign has been generated earlier by the deploy.sh script. Booting the ISO with the embedded ignition should work, but it did not in our case. Therefore, in this example, we will boot into the live ISO image and kick off the installation from there manually.
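For completeness, the embed route we could not get to work would look roughly like this; the file names below are assumptions, and `coreos-installer iso ignition embed` is the relevant subcommand:

```shell
# Sketch: embed worker.ign into a copy of the live ISO (run on the jumpbox,
# then serve 4.9-worker.iso over http for the BMC virtual media).
iso=rhcos-4.9-live.x86_64.iso
cmd="coreos-installer iso ignition embed -i worker.ign -o 4.9-worker.iso ${iso}"
echo "${cmd}"
```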
- Log on to the BMC of the worker host, mount the 4.9-worker.iso (from your http location) and boot the CD/DVD virtual media.
- Wait until you end up in a shell.
- Configure the network interface, for example:

id='Wired connection 1'
iface=eno24
newId="$iface"
sudo nmcli c down "$id"
sudo nmcli c modify "$id" ipv4.addresses "10.123.223.36/22" \
  ipv4.gateway 10.97.228.1 ipv4.dns 10.1.2.232 \
  ipv4.method manual connection.id "$newId"
sudo nmcli c up "$newId"
- Kick off the installation:

sudo coreos-installer install --copy-network \
  --ignition-url http://10.1.2.232:8080/rhcos/snoplusone/ignition/worker.ign \
  --insecure --insecure-ignition /dev/sda
- Once done, reboot and unmount the ISO.
- The worker will generate CSRs for joining the SNO cluster; they need to be approved manually. You can wait for them and approve automatically like this:

while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | \
    xargs -r oc adm certificate approve
  sleep 10
done
- Profit!