
Qubinode OpenShift 4 Deployment on KVM

These steps describe deploying a single-node OpenShift 4.2 cluster on KVM, using IdM as the DNS server. The network uses both NAT and a bridge: the OpenShift nodes are deployed on the NAT network, while the IdM server is deployed on the bridge so external machines can resolve the addresses of the OpenShift cluster.

KVM Infrastructure configuration

Download the repo, extract it, and run the Qubinode installer:

wget https://github.com/tosin2013/qubinode-installer/archive/master.zip
unzip master.zip
mv qubinode-installer-master qubinode-installer
rm -f master.zip
cd qubinode-installer
./qubinode-installer

Install KVM packages and configure the bridge and NAT networks:
Choose Option 4

To troubleshoot:
Review the variables under /home/admin/qubinode-installer/playbooks/vars/all.yml and restart Option 4

Show network settings

$ sudo virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 ocp42                active     yes           yes
 qubinet              active     yes           yes

Copy the RHEL server image for the IdM installation:
Copy rhel-server-7.6-x86_64-kvm.qcow2 to the $HOME/qubinode-installer directory.
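For example, if the image was downloaded to ~/Downloads (the source path is an assumption; adjust it to wherever you saved the image):

cp ~/Downloads/rhel-server-7.6-x86_64-kvm.qcow2 $HOME/qubinode-installer/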

install idm

./qubinode-installer -p idm

Configure OpenShift settings
Copy the contents of https://github.com/tosin2013/qubinode-installer/blob/ocp4/samples/ocp4.yml to samples/all.yml, then update the following.

Populate the pull secret and SSH key in samples/all.yml.

The pull secret can be found at https://cloud.openshift.com/clusters/install (select the Bare Metal option):

image_pull_secret:

Paste the contents of your public SSH key, or create a new one:

$ cat ~/.ssh/id_rsa.pub

ssh_ocp4_public_key:

**Make sure kvm_host_ip is populated or the deployment will fail.**

kvm_host_ip: <your host IP>
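As a rough sketch, the relevant keys in samples/all.yml might end up looking like this (every value below is a placeholder; substitute your own pull secret, public key, and host IP):

image_pull_secret: '{"auths":{"cloud.openshift.com":{"auth":"...","email":"you@example.com"}}}'
ssh_ocp4_public_key: ssh-rsa AAAAB3NzaC1yc2E... admin@qubinode
kvm_host_ip: 192.168.1.10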

Run the following playbooks:

ansible-playbook playbooks/ocp4_01_deployer_node_setup.yml
ansible-playbook playbooks/ocp4_02_configure_dns_entries.yml

To test, run dig against one of the name servers.

It's recommended to test dig against several different FQDNs; if resolution fails, the OCP4 deployment will fail.
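For example, a quick check against the IdM server might look like this (the server IP and FQDNs are assumptions; use your own IdM IP and cluster domain):

dig @192.168.1.25 api.ocp42.example.com +short
dig @192.168.1.25 bootstrap.ocp42.example.com +short
dig @192.168.1.25 master-0.ocp42.example.com +short

Each query should return the expected node IP; an empty answer means DNS needs fixing before continuing.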

Configure Load balancer

ansible-playbook playbooks/ocp4_03_configure_lb.yml

Test that the container is running

$  sudo podman ps

These firewall rules will be added to the ocp4_01_deployer_node_setup.yml playbook; for now, add them manually:

sudo firewall-cmd --add-port={80/tcp,8080/tcp,443/tcp,6443/tcp,22623/tcp,32700/tcp} --permanent
sudo firewall-cmd --reload
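To confirm the ports were actually added:

sudo firewall-cmd --list-ports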

Download the OpenShift CLI and installer

ansible-playbook playbooks/ocp4_04_download_openshift_artifacts.yml

Create OpenShift ignition files for the bootstrap, master, and worker nodes

ansible-playbook playbooks/ocp4_05_create_ignition_configs.yml
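After the playbook finishes, the ignition files and auth assets should be in the ocp4 directory (these filenames are the standard openshift-install outputs):

ls -l ocp4/
# expect bootstrap.ign, master.ign, worker.ign, auth/, and metadata.json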

Deploy the webserver that the CoreOS nodes will use to download ignition files and CoreOS images

ansible-playbook playbooks/ocp4_06_deploy_webserver.yml 

Test that the webserver container is up

sudo podman ps
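You can also check that the webserver answers over HTTP; port 8080 is an assumption based on the firewall rules opened earlier, so adjust it to match your deployment:

curl -I http://<kvm-host-ip>:8080/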

Create all CoreOS VMs

bash -xe lib/qubinode_deploy_ocp4_nodes.sh

Boot up all CoreOS nodes

ansible-playbook playbooks/ocp4_08_startup_coreos_nodes.yml

If you need to remove the nodes, run: bash -x lib/qubinode_cleanup_coreos_vms.sh

OpenShift installation

Test communication with the bootstrap node or another VM. The default IP address for the bootstrap node is shown below:

# ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no"  core@192.168.50.2
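Once connected, you can follow the bootstrap progress from the node itself; bootkube.service is the standard OpenShift 4 bootstrap unit:

$ journalctl -b -f -u bootkube.service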

Track the installation status of the bootstrap node. Note: the master nodes will restart and begin configuring OpenShift; expect a wait....

$ cd ~/qubinode-installer 
$  openshift-install --dir=ocp4 wait-for bootstrap-complete --log-level debug
DEBUG OpenShift Installer v4.2.1                   
DEBUG Built from commit e349157f325dba2d06666987603da39965be5319 
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp42.qubinodedemo.com:6443... 
INFO API v1.14.6+868bc38 up                       
INFO Waiting up to 30m0s for bootstrapping to complete... 
DEBUG Bootstrap status: complete                   
INFO It is now safe to remove the bootstrap resources 

Shut down the bootstrap node

$ sudo virsh shutdown bootstrap
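If you prefer to delete the bootstrap VM entirely once the installer reports it is safe to remove the bootstrap resources (optional; the cleanup script mentioned earlier also removes nodes):

$ sudo virsh undefine bootstrap --remove-all-storage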

Check the OpenShift environment and monitor the clusteroperator status

$ export KUBECONFIG=/home/admin/qubinode-installer/ocp4/auth/kubeconfig 
$ oc whoami
$ oc get nodes
$ oc get csr
$ watch -n5 oc get clusteroperators
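If oc get csr shows requests stuck in Pending, the worker nodes cannot join until those requests are approved. One way to approve everything currently pending (use with care):

$ oc get csr -o name | xargs oc adm certificate approve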

Configure registry to use empty directory
[registry-configuring-storage-baremetal_installing-bare-metal](https://docs.openshift.com/container-platform/4.2/installing/installing_bare_metal/installing-bare-metal.html#registry-configuring-storage-baremetal_installing-bare-metal)

$ oc get pod -n openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
$ oc get pod -n openshift-image-registry
$ watch -n5 oc get clusteroperators

Check that the OpenShift installation is complete

$ cd ~/qubinode-installer 
$ openshift-install --dir=ocp4 wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.ocp42.example.com:6443 to initialize... 
INFO Waiting up to 10m0s for the openshift-console route to be created... 
INFO Install complete!                            
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/admin/qubinode-installer/ocp4/auth/kubeconfig' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp42.example.com 

Post Steps

Update your DNS server to reach the OpenShift cluster

  • Option 1: Add the IdM server IP to your home router's DNS settings so all machines on your network can reach the console URL.
  • Option 2: Add the IdM DNS server to /etc/resolv.conf on your laptop so your laptop can resolve the OpenShift cluster (see the sketch below).
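A minimal sketch of Option 2, assuming the IdM server answers DNS at 192.168.1.25 (substitute your IdM server's bridge IP); note that NetworkManager may overwrite /etc/resolv.conf on reconnect:

echo "nameserver 192.168.1.25" | sudo tee -a /etc/resolv.conf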

Optional: Install Cockpit
To manage and view the cluster from a web UI:

sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
sudo yum install  cockpit cockpit-networkmanager cockpit-dashboard \
  cockpit-storaged cockpit-packagekit cockpit-machines cockpit-sosreport \
  cockpit-pcp cockpit-bridge -y
sudo systemctl start cockpit
sudo systemctl enable cockpit.socket
sudo firewall-cmd --add-service=cockpit
sudo firewall-cmd --add-service=cockpit --permanent
sudo firewall-cmd --reload

Go to your server's URL for the Cockpit UI:

https://SERVER_IP:9090