OpenShift Origin on Any Cloud Provider

Credits

Credits go to Bryan Sazon (john.bryan.j.sazon@accenture.com) for the original content of his Gist.

I also used https://www.projectatomic.io/blog/2017/05/oo-standalone-registry/ as a reference.


A great number of DevOps folks struggle when they first try out OpenShift, and many are overwhelmed deciding whether to start with a multi-node or a single-node installation.

This fork is meant to provide options. I highlight the first option below -- a SINGLE NODE installation to try before you move on to multiple nodes.

OpenShift Origin is a great way to host your apps, especially if you need a quick-and-dirty DevOps environment. It harnesses the power of the Kubernetes engine and the flexibility of Docker containers, making it easy for you to start containerizing your apps.


Prepare your VMs using a cloud provider.

System requirements for VM provisioning:

  • RHEL 7 or CentOS 7, 2 CPUs, 8 GB RAM minimum
  • Attach a separate disk volume with 70 GB of space
  • Root disk volume should be 50 GB and up
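
You can quickly confirm a VM meets these specs before moving on. A minimal sanity check (assumes the extra volume shows up as a second block device; the exact device name varies by provider):

## Quick spec check on each VM
nproc                 # expect 2 or more CPUs
free -h | grep Mem    # expect 8 GB RAM or more
lsblk                 # expect the root disk plus a separate ~70 GB volume
df -h /               # root volume should be 50 GB and up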

PRE-WORK

Once the VMs are provisioned, run a bash script with the following content on each machine:

## Install misc tools
yum clean all
rm -Rf /var/cache/yum
yum -y install epel-release
yum -y install git
yum -y install ansible
yum -y --enablerepo=epel install python-pip python-devel python


## Install NetworkManager (not part of the documentation)
## For fixing ansible playbook errors about this package not existing
yum -y install NetworkManager
systemctl enable NetworkManager
systemctl start NetworkManager


## Install docker
yum --enablerepo="epel" install -y docker
sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --log-opt max-size=1M --log-opt max-file=3"' /etc/sysconfig/docker
yum install -y docker-compose

## Configure docker-storage-setup.
## Make sure you have an extra disk, visible via the lsblk command.
## Adjust DEVS to match your provider's device name (e.g. /dev/xvdb on AWS, /dev/sdb on GCP).
lvmconf --disable-cluster
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/xvdb
VG=docker-vg
WIPE_SIGNATURES=true
EOF

## Setup docker storage 
rm -fr /var/lib/docker
docker-storage-setup

## Start docker
systemctl enable docker && \
systemctl start docker
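
If docker-storage-setup succeeded, the docker-vg volume group should now exist and Docker should be using it. A quick verification sketch (assumes the DEVS/VG values from the script above):

## Verify docker storage
vgs docker-vg                            # volume group created by docker-storage-setup
lvs docker-vg                            # the thin pool logical volume should be listed
docker info | grep -i 'storage driver'   # devicemapper is expected with this setup
systemctl is-active docker               # should print 'active'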

The above bash script installs NetworkManager, the EPEL repo, Git, and the Ansible runtime.
It also prepares your OS block storage for Docker storage configuration.

Next, you need to check out the right release tag of openshift-ansible, specifically openshift-ansible-3.6.24-1:

$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout openshift-ansible-3.6.24-1
HEAD is now at 90f6a70... Automatic commit of package [openshift-ansible] release [3.6.24-1].


OPTION A: OpenShift Origin -- Single Node, All-in-One (sample Ansible inventory file)

For any type of cloud provider

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=true

openshift_master_default_subdomain=35.198.131.7.xip.io
openshift_deployment_type=origin
openshift_release=v3.6.0
deployment_subtype=registry
containerized=true

# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability,disk_availability

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

#openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1'}

# host group for masters
[masters]
35.198.131.7

[etcd]
35.198.131.7

# host group for nodes; the master is listed here so that
# openshift-sdn gets installed. In this single-node install the
# master must also be schedulable so pods can run on it.

[nodes]
35.198.131.7 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
#
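
Before running the full installer, it is worth confirming that Ansible can reach every host in this inventory. A minimal connectivity check (assumes the inventory above is saved as openshift-hosts.txt, the file name used in the install commands later):

## Ping all OSEv3 hosts through Ansible
ansible -i openshift-hosts.txt OSEv3 -m ping --private-key <YOUR_SSH_KEY_PAIR>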


OPTION B: OpenShift Origin -- Single Master, Multi-Node (sample Ansible inventory files)

For GCP (Google Cloud Platform)


# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=true

openshift_master_default_subdomain=35.198.97.0.xip.io

openshift_deployment_type=origin
openshift_release=v3.6.0
deployment_subtype=registry
containerized=true

# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}]
#openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1'}

# host group for masters
[masters]
35.198.97.0 openshift_schedulable=true

[etcd]
35.198.97.0

# host group for worker nodes

[nodes]
35.198.97.0 openshift_node_labels="{'region': 'infra'}"
35.198.170.189 openshift_node_labels="{'region': 'apps', 'zone': 'default'}"
#

For AWS EC2

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user; this user must be able to SSH in without a password (key-based auth)
ansible_ssh_user=centos
deployment_type=origin

# Master node time sync
openshift_clock_enabled=true

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

# enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}]

# EC2PUBLICIP is the public IP of the master instance (the one with private hostname ip-172-31-27-247.ap-southeast-1.compute.internal)
openshift_master_default_subdomain=ose.EC2PUBLICIP.xip.io

# default project node selector
osm_default_node_selector='region=apps'

# default selectors for router and registry services
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'

# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability

# so builds can push successfully to the insecure registry using its default cidr block
openshift_docker_insecure_registries=172.30.0.0/16

# Do NOT configure dnsIP in the node config. It causes issues with the AWS resolver: containers cannot resolve external hostnames.
#openshift_dns_ip=172.30.0.1

# host group for masters
[masters]
ip-172-31-27-247.ap-southeast-1.compute.internal openshift_schedulable=true

[etcd]
ip-172-31-27-247.ap-southeast-1.compute.internal

# host group for nodes, includes region info
[nodes]
ip-172-31-27-247.ap-southeast-1.compute.internal openshift_node_labels="{'region': 'infra'}"
ip-172-31-22-154.ap-southeast-1.compute.internal openshift_node_labels="{'region': 'apps', 'zone': 'default'}"
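
Because this inventory uses the EC2 internal hostnames, every host must be able to resolve the others. A quick check from the master (hostnames are the sample values from the inventory above):

## From the master, confirm the worker's internal hostname resolves and is reachable
getent hosts ip-172-31-22-154.ap-southeast-1.compute.internal
ping -c 2 ip-172-31-22-154.ap-southeast-1.compute.internal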

For Azure VM

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user; this user must be able to SSH in without a password (key-based auth)
ansible_ssh_user=john.bryan.j.sazon
deployment_type=origin

# Master node time sync
openshift_clock_enabled=true

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

# enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}]

openshift_master_default_subdomain=origin.55.41.148.233.xip.io

# default project node selector
osm_default_node_selector='region=apps'

# default selectors for router and registry services
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'

# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability,disk_availability

# so builds can push successfully to the insecure registry using its default cidr block
openshift_docker_insecure_registries=172.30.0.0/16

# Configure dnsIP in the node config
openshift_dns_ip=172.30.0.1

# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
openshift_enable_unsupported_configurations=false

# openshift-ansible will wait indefinitely for your input when it detects that the
# value of openshift_hostname resolves to an IP address not bound to any local
# interfaces. This mis-configuration is problematic for any pod leveraging host
# networking and liveness or readiness probes.
# Setting this variable to true will override that check.
openshift_override_hostname_check=true

openshift_master_cluster_hostname=public-master.eastus.cloudapp.azure.com 
openshift_master_cluster_public_hostname=public-master.eastus.cloudapp.azure.com 
openshift_master_cluster_public_vip=55.41.148.233

[masters]
55.41.148.233 openshift_schedulable=true

[etcd]
55.41.148.233

[nodes]
55.41.148.233 openshift_node_labels="{'region': 'infra'}" openshift_hostname=openshift-master01
56.114.47.192 openshift_node_labels="{'region': 'apps', 'zone': 'default'}" openshift_hostname=openshift-worker01
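
The openshift_override_hostname_check flag is needed here because on Azure the public IP is usually not bound to any local interface. You can confirm this on a node before setting the flag (the external IP lookup service is an assumption; any equivalent works):

## The public IP will normally NOT appear among the locally bound addresses,
## which is exactly what triggers the openshift_hostname check
hostname -I            # locally bound (private) IPs
curl -s ifconfig.me    # the VM's public IP, via an external service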

EXECUTE the INSTALLATION

ansible-playbook openshift-ansible/playbooks/byo/openshift_facts.yml --private-key <YOUR_SSH_KEY_PAIR> -i openshift-hosts.txt
ansible-playbook openshift-ansible/playbooks/byo/config.yml --private-key <YOUR_SSH_KEY_PAIR> -i openshift-hosts.txt

===========================================

Running with a PRIVATE KEY:

  • Assuming your EC2 VMs can reach each other, you can RUN the Ansible installer on the MASTER node to deploy components to the other NODES.

  • You do NOT need to specify --private-key if the VM can SSH to the other VMs without a password.
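
If passwordless SSH is not set up yet, a minimal sketch from the MASTER node (assumes the same SSH user, root here, on every host):

## On the master: generate a key pair and copy it to each node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@<NODE_IP>      # repeat for every node in the inventory
ssh root@<NODE_IP> hostname     # should log in without a password prompt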

===========================================


Post Installation

  • Create the initial user (use the htpasswd filename you configured in openshift_master_identity_providers)
htpasswd -b /etc/origin/openshift-passwd admin admin
oadm policy add-role-to-user cluster-admin admin
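
To verify, log in as the new user and check the cluster (admin/admin are the sample credentials created above; replace <MASTER_PUBLIC_IP> with your master's address):

oc login -u admin -p admin https://<MASTER_PUBLIC_IP>:8443
oc get nodes     # every node from the inventory should report Ready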


----------------------------------


---
# Testing

- Ensure that the subdomain is accessible: http://ose.EC2PUBLICIP.xip.io
- Create a sample app to test a push to the Docker registry (see the sketch below)
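
A hedged sketch for the second test, using the stock OpenShift nodejs-ex sample (the project name is arbitrary; the build config and service names follow the repo name):

oc new-project sample-test
oc new-app https://github.com/openshift/nodejs-ex
oc logs -f bc/nodejs-ex    # watch the build, including the push to the internal registry
oc expose svc/nodejs-ex    # creates a route under the configured subdomain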


# Use systemctl to check the MASTER and NODE services of ORIGINs

   -  The MASTER service is represented by **_origin-master.service_**, not _atomic-openshift-master_.

   -  The NODE service is represented by **_origin-node.service_**, not _atomic-openshift-node_.
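
For example:

systemctl status origin-master.service   # on the master
systemctl status origin-node.service     # on each node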


