Host system is Ubuntu 16.04 LTS with 8 CPUs and 32 GB RAM
Domain-name: *.example.com
Install virt-manager
sudo apt-get install virt-manager
Prepare three VMs (one master and two compute nodes). Create a virtual network (NAT) and assign static IPs to the hosts.
| master.example.com | node1.example.com | node2.example.com |
|---|---|---|
| 2 Cores | 2 Cores | 2 Cores |
| 8 GB RAM | 4 GB RAM | 4 GB RAM |
| 60 GB VirtIO Disk | 30 GB VirtIO Disk | 30 GB VirtIO Disk |
| 192.168.10.236 | 192.168.10.218 | 192.168.10.204 |
Boot and install CentOS Linux release 7.6.1810 (Core) using the CentOS-7-x86_64-DVD-1810.iso image.
Make sure you configure static IPs and enable NTP.
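The static IP and NTP setup can be done with nmcli and chronyd. This is a sketch for the master; the connection name `eth0` and the gateway `192.168.10.1` are assumptions, so adjust them to your virtual network (and use the node IPs on node1/node2):

```shell
# Assign the static IP to the "eth0" connection (connection name and
# gateway are assumptions -- check with: nmcli connection show)
nmcli connection modify eth0 \
  ipv4.method manual \
  ipv4.addresses 192.168.10.236/24 \
  ipv4.gateway 192.168.10.1 \
  ipv4.dns 192.168.10.1
nmcli connection up eth0

# Enable time synchronisation (chronyd ships with CentOS 7)
timedatectl set-ntp true
systemctl enable --now chronyd
```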
Add the IP addresses and the short hostnames to your local /etc/hosts
192.168.10.236 master master.example.com
192.168.10.218 node1 node1.example.com
192.168.10.204 node2 node2.example.com
Add the FQDNs to /etc/hosts on the master and all nodes. Also make sure that there is no 127.0.0.1 (or 127.0.1.1) entry mapped to your hostname.
192.168.10.236 master.example.com
192.168.10.218 node1.example.com
192.168.10.204 node2.example.com
On all nodes create an installation user and grant it passwordless sudo privileges.
[root@master ~]# useradd origin
[root@master ~]# passwd origin
[root@master ~]# echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
[root@master ~]# chmod 440 /etc/sudoers.d/openshift
Install required packages on all nodes
[root@master ~]# yum -y install centos-release-openshift-origin311 epel-release docker git pyOpenSSL
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
As the origin user on the master node, create an SSH key pair without a passphrase.
[origin@master ~]$ ssh-keygen -q -N ""
Make sure the ~/.ssh/config
file looks as follows and has -rw------- (600) permissions:
[origin@master ~]$ cat ~/.ssh/config
Host master
Hostname master.example.com
User origin
Host node1
Hostname node1.example.com
User origin
Host node2
Hostname node2.example.com
User origin
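The permissions can be set and verified like this:

```shell
# Restrict the config file to the owner (600 = -rw-------)
chmod 600 ~/.ssh/config

# Verify: the mode column should read -rw-------
stat -c '%A' ~/.ssh/config   # -rw-------
```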
Copy the public key to all hosts (including the master itself)
[origin@master ~]$ ssh-copy-id node1
[origin@master ~]$ ssh-copy-id node2
[origin@master ~]$ ssh-copy-id master
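A quick check that passwordless SSH works for every host; BatchMode makes ssh fail instead of prompting, so any remaining password prompt shows up as an error:

```shell
# Each iteration should print the node's FQDN without asking for a password
for host in master node1 node2; do
  ssh -o BatchMode=yes "$host" 'hostname -f'
done
```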
On the master node install the openshift-ansible package
[origin@master ~]$ sudo yum -y install openshift-ansible
Before installing the cluster using the
deploy_cluster.yml
playbook, make sure that your local DNS resolution works.
[root@master ~]# hostname
master.example.com
[root@master ~]# hostname -f
master.example.com
[root@master ~]# ping master.example.com
PING master.example.com (192.168.10.236) 56(84) bytes of data.
64 bytes from master.example.com (192.168.10.236): icmp_seq=1 ttl=64 time=0.294 ms
[root@master ~]# ping master
PING master.example.com (192.168.10.236) 56(84) bytes of data.
64 bytes from master.example.com (192.168.10.236): icmp_seq=1 ttl=64 time=0.294 ms
Note: hostname
and hostname -f
must both return your FQDN.
Now configure the Ansible hosts file for the OKD advanced installation
[OSEv3:children]
masters
nodes
etcd
[masters]
master.example.com openshift_schedulable=true containerized=false
[etcd]
master.example.com
[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
[OSEv3:vars]
# General Variables
ansible_ssh_user=origin
ansible_become=true
openshift_disable_check=disk_availability,docker_storage,memory_availability
openshift_deployment_type=origin
openshift_release=v3.11
use_overlay2_driver=true
# Networking
os_firewall_use_firewalld=true
openshift_docker_insecure_registries=172.30.0.0/16
openshift_master_api_port=8443
openshift_master_console_port=8443
openshift_master_default_subdomain=ocp.example.com
# Cluster auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
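openshift-ansible reads /etc/ansible/hosts by default; assuming the inventory above was saved there, connectivity to all hosts can be verified with an ad-hoc ping before running the playbooks:

```shell
# Every host should answer with "SUCCESS ... \"ping\": \"pong\""
ansible all -m ping
```

If you keep the inventory elsewhere, pass it explicitly with `-i /path/to/inventory` to this command and to the playbook runs below.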
Run the prerequisites.yml
ansible playbook.
[origin@master ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
Run the deploy_cluster.yml
ansible playbook.
[origin@master ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
If you have trouble installing, try Ansible 2.6.x (there were known issues with Ansible 2.7).
Now make sure you add users to /etc/origin/master/htpasswd
. Generate a username and password hash with:
htpasswd -bn admin secret
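Alternatively, the user can be written directly into the identity file on the master, and optionally granted cluster-admin rights (the user name `admin` and password `secret` are just examples):

```shell
# Append/update the user "admin" in the htpasswd identity file
sudo htpasswd -b /etc/origin/master/htpasswd admin secret

# Optional: grant cluster-admin to that user (run as system:admin)
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
```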
The cluster has been configured to use *.ocp.example.com as the application subdomain.
Virt-manager installed dnsmasq
on your host system. Configure dnsmasq to resolve the wildcard domain.
root@host # echo "address=/.ocp.example.com/192.168.10.236" > /etc/NetworkManager/dnsmasq.d/ocp-wildcard
Reboot your host system.
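After the reboot, any name under the wildcard domain should resolve to the master's IP; the label `myapp` below is arbitrary:

```shell
# Should print 192.168.10.236 for any name under ocp.example.com
dig +short myapp.ocp.example.com
```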
Add the following to your Ansible inventory file. This installs the metrics stack using ephemeral storage.
# Metrics https://docs.openshift.com/container-platform/3.11/install_config/cluster_metrics.html#metrics-ansible-variables
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=emptydir
openshift_metrics_heapster_requests_memory=300M
openshift_metrics_hawkular_requests_memory=750M
openshift_metrics_cassandra_requests_memory=750M
openshift_metrics_image_version=v3.11
Install the metrics system
[origin@master ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
This runs the metrics system without persistent storage, which may cause issues after a reboot.
If the metrics system (hawkular-metrics) crashes on startup, fix it by rebuilding the hawkular-metrics-schema; see https://access.redhat.com/solutions/3645682
After the fix, access hawkular-metrics.ocp.example.com so that the hawkular-metrics system starts completely.
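The state of the metrics pods can be checked from the master; all pods in the openshift-infra namespace should eventually reach Running:

```shell
# Hawkular, Heapster and Cassandra pods live in openshift-infra
oc get pods -n openshift-infra
```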
To uninstall OKD run the uninstall.yml
playbook
[origin@master ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml