@jcantrill
Created August 28, 2018 13:22
Standing up OpenShift using 'oc cluster up' and ansible
Overview
At the time of writing, 'oc cluster up --logging' (and its 3.11 equivalent) is broken. The following are instructions for using 'oc cluster up' followed by ansible to install logging. These instructions are generally valid for any OpenShift release from 3.5 to 3.11.
Environment
These instructions are based on using:
Host: CentOS 7 on libvirt
Memory: 8 GB
CPU: 4
Storage: 100 GB
oc binary: oc v3.10.0-alpha.0+418f69f-1341-dirty, kubernetes v1.10.0+b81c8f8
Setup
Become root
sudo bash
Start the OpenShift cluster, giving it the public hostname of your host. With the v3.10 'oc' binary this creates a subdirectory under the current directory: /home/centos/openshift.local.clusterup
oc cluster up --public-hostname=192.168.122.45.nip.io
Note: ALWAYS start the cluster from the same directory in order to reuse the generated configs. Doing otherwise will result in a non-functional cluster. You can recover by deleting the entire directory and starting over.
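A quick sanity check that the cluster came up (this assumes the v3.10 client, which still provides the 'oc cluster status' subcommand):
oc cluster status
oc login -u system:admin
oc get nodes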
Establish a link to the admin.kubeconfig that openshift-ansible expects:
ln -s /home/centos/openshift.local.clusterup/kube-apiserver/admin.kubeconfig /etc/origin/master/admin.kubeconfig
Note: Earlier versions updated the master config to set the logging URL for the web console. That method requires you to also link the master config:
ln -s /var/lib/origin/openshift.local.config/master/master-config.yaml /etc/origin/master/master-config.yaml
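Note: If the ln commands above fail because /etc/origin/master does not exist ('oc cluster up' does not appear to create that path for you), create it first and rerun them:
mkdir -p /etc/origin/master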
Check out ansible (https://github.com/ansible/ansible) from source
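If you do not already have a local clone, something like the following (the directory layout here is just an example):
git clone https://github.com/ansible/ansible.git
cd ansible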
Check out the version
git checkout stable-2.4
Set up the environment
source hacking/env-setup
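The env-setup script puts this checkout's ansible on your PATH; a quick check that you are using the expected version (it should report 2.4.x):
ansible --version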
Check out openshift-ansible (https://github.com/openshift/openshift-ansible)
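As with ansible, clone it first if needed (this sketch assumes you are still inside the ansible checkout and want the repos side by side):
cd ..
git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible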
Check out the version
git checkout release-3.10
Retrieve inventory file
wget https://raw.githubusercontent.com/jcantrill/jctoolbox/master/openshift/inventories/origin-310.inventory
From the openshift-ansible root directory, install logging. Note: You will most likely need to edit the host and IP variables in the inventory to match the value given for --public-hostname; this allows you to access Kibana from outside the VM. You will also need to set ansible_ssh_private_key_file to your private key (see the sketch after the playbook command below).
ansible-playbook -i ../jctoolbox/openshift/inventories/origin-310.inventory playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=true
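The inventory edits mentioned above usually amount to something like the following (an illustrative sketch only; these are standard ansible/openshift-ansible variables, but the actual entries in origin-310.inventory may differ):

[masters]
192.168.122.45 ansible_ssh_private_key_file=~/.ssh/your_key

[OSEv3:vars]
openshift_public_hostname=192.168.122.45.nip.io
openshift_logging_kibana_hostname=kibana.192.168.122.45.nip.io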
Note: Sometimes the playbook fails on the last step while trying to restart the OpenShift service. This failure can be ignored since we are not running OpenShift as a service:
RUNNING HANDLER [openshift_logging : restart master api] **********************************************************************************************************************************************************
Tuesday 17 July 2018 10:04:19 -0400 (0:00:00.259) 0:02:20.440 **********
fatal: [openshiftdev.local]: FAILED! => {"failed": true, "msg": "The conditional check '(not (master_api_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'' failed. The error was: error while evaluating conditional ((not (master_api_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'): 'dict object' has no attribute 'master'\n\nThe error appears to have been in '/home/jeff.cantrill/git/openshift-ansible/roles/openshift_logging/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: restart master api\n ^ here\n"}
Check that the pods are running:
oc get pods -n openshift-logging
NAME READY STATUS RESTARTS AGE
logging-curator-1-x5jbw 1/1 Running 0 1m
logging-es-data-master-cuyorw4k-1-8hjnf 2/2 Running 0 1m
logging-fluentd-5xwbl 1/1 Running 0 1m
logging-kibana-1-4cmgq 2/2 Running 0 2m
Create an admin user
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
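To verify, log back in as the new user; 'oc cluster up' configures an allow-all identity provider by default, so any password is accepted:
oc login -u admin -p anypassword
oc get pods --all-namespaces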
Accessing Kibana
The Kibana route is defined in the inventory file; in this example it is https://kibana.192.168.122.45.nip.io. This takes you to a login page; use 'admin' and any password to get admin access.
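If you are unsure of the exact hostname, you can read the route back from the cluster:
oc get routes -n openshift-logging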
Controls of Interest
Time Picker: Located in the upper-right corner of the display; adjust it to an appropriate range in order to see log records (e.g. set it to 'This week')
Index Pattern: Located in the left panel; restricts which indices are searched (e.g. set it to '.operations.*')
Filter Bar: The bar across the top further restricts the query; the default '*' matches everything, but it could be something like 'kubernetes.pod_name:foo'
Known Issues
Networking
Sometimes I experience networking issues where pods are no longer able to communicate with the API server, along with other non-obvious flakes. This generally seems to occur after I have suspended the VM and moved locations. It is typically resolved by flushing iptables and restarting docker. Note: this will kill your running cluster:
iptables --flush && iptables -t nat --flush
systemctl restart docker
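Once docker is back up, restart the cluster from the same directory used originally so the generated configs are reused:
oc cluster up --public-hostname=192.168.122.45.nip.io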