OpenShift on RHEL Guide

Installation on RHEL 7.1

Parallels Tools (macOS only)

mkdir /media/cdrom
mount -o exec /dev/sr0 /media/cdrom
cd /media/cdrom
./install

Dnsmasq (macOS)

brew install dnsmasq
cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf
sudo cp -fv /usr/local/opt/dnsmasq/*.plist /Library/LaunchDaemons
sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
echo "address=/cloud.devops.org/10.211.55.10" >> /usr/local/etc/dnsmasq.conf
sudo launchctl stop homebrew.mxcl.dnsmasq
sudo launchctl start homebrew.mxcl.dnsmasq
sudo mkdir /etc/resolver
sudo touch /etc/resolver/cloud.devops.org
echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/cloud.devops.org
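
To verify the local resolver answers for the wildcard domain, query dnsmasq directly (the hostname below is an arbitrary example):

dig @127.0.0.1 test.cloud.devops.org

The ANSWER SECTION should return 10.211.55.10.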
Source: http://passingcuriosity.com/2013/dnsmasq-dev-osx/

Registering

subscription-manager register
subscription-manager list --available
subscription-manager attach --pool=<pool id>

Repos

First, disable all repositories:

subscription-manager repos --disable "*"

Then enable the desired ones:

subscription-manager repos --enable rhel-7-server-ose-3.1-rpms --enable rhel-7-server-rpms --enable rhel-7-server-extras-rpms

Dependencies

yum -y update
yum -y install wget git net-tools bind-utils iptables-services bridge-utils bash-completion docker python-virtualenv gcc vim
systemctl enable docker
systemctl start docker
systemctl disable NetworkManager
systemctl stop NetworkManager
yum remove NetworkManager -y
Note: the 'bash-completion' package is needed to enable oc/oadm command completion in your shell. To test it, type 'oc <TAB> <TAB>' and check the completion list.

Master

Add a second hard disk to the VM. It should appear as /dev/sda or /dev/sdb; run fdisk -l to find out which.

Create new partition

[root@master]# fdisk /dev/sdb
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-41943039, default 2048):
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Command (m for help): w

Add the new partition to the volume group

pvcreate /dev/sdb1
vgextend rhel /dev/sdb1
lvs

Tell Docker to use it

docker-storage-setup
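
docker-storage-setup reads its options from /etc/sysconfig/docker-storage-setup. A minimal sketch, assuming you want it to use the 'rhel' volume group extended above:

# /etc/sysconfig/docker-storage-setup
VG=rhel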

Hostname

hostnamectl set-hostname master.devops.org

DNS

Set this configuration on all machines

vim /etc/hosts
10.211.55.10	master.devops.org	master
10.211.55.11	node01.devops.org	node01
10.211.55.12	node02.devops.org	node02

DNSMasq setup (must not run on the master!)

This training repository contains a sample dnsmasq.conf file and a sample hosts file. If you do not have the ability to manipulate DNS in your environment, or just want a quick and dirty way to set up DNS, you can install dnsmasq on one of your nodes. Do not install dnsmasq on your master: OpenShift runs an internal DNS service there (Go's "SkyDNS") for internal service communication, and the two would conflict.

yum -y install dnsmasq

Replace /etc/dnsmasq.conf with the one from this repository, and replace /etc/hosts with the hosts file from this repository.

Copy your current /etc/resolv.conf to a new file such as /etc/resolv.conf.upstream. Ensure it contains only an upstream resolver (e.g. Google DNS at 8.8.8.8), not the address of your dnsmasq server.
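
For example, assuming Google DNS as the upstream:

# /etc/resolv.conf.upstream
nameserver 8.8.8.8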

Enable and start the dnsmasq service:

systemctl enable dnsmasq; systemctl start dnsmasq

You will need to ensure or fix the following:

  1. Your IP addresses match the entries in /etc/hosts

  2. Your hostnames for your machines match the entries in /etc/hosts

  3. Your cloudapps domain points to the correct node ip in dnsmasq.conf

  4. Each of your systems has the same /etc/hosts file

  5. The /etc/resolv.conf on your master and nodes lists the IP address of the node running dnsmasq as the first nameserver

  6. Your dnsmasq instance uses the resolv-file option to point to /etc/resolv.conf.upstream only (see the sketch after this list)

  7. Port 53 (TCP and UDP) is open so DNS queries can reach the node. In /etc/sysconfig/iptables add:

     -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
     -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT

Following this setup for dnsmasq ensures that your wildcard domain works, that your hosts in the example.com domain resolve, that any other DNS requests resolve via your configured local/remote nameservers, and that DNS resolution works inside all of your containers. Don't forget to start and enable the dnsmasq service.
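
A minimal /etc/dnsmasq.conf sketch satisfying items 3 and 6 (the addresses follow this guide's devops.org naming; adjust if you used the example.com names from the training repository):

# Wildcard: everything under cloud.devops.org resolves to the node running the router
address=/cloud.devops.org/10.211.55.10
# Forward all other queries to the upstream resolver only
resolv-file=/etc/resolv.conf.upstream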

Verifying DNSMasq

You can query the local DNS on the master using dig (provided by the bind-utils package) to make sure it returns the correct records:

dig ose3-master.example.com
...
;; ANSWER SECTION:
ose3-master.example.com. 0  IN  A 192.168.133.2
...

The returned IP should be the public interface’s IP on the master. Repeat for your nodes. To verify the wildcard entry, simply dig an arbitrary domain in the wildcard space:

dig foo.cloudapps.example.com
...
;; ANSWER SECTION:
foo.cloudapps.example.com 0 IN A 192.168.133.2
...

DNS Bind (must not run on the master)

yum install bind*
Copy the content below to /etc/named.conf:
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { any; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "devops.org" IN {
        type master;
        file "dynamic/devops.org.db";
};

zone "cloud.devops.org" IN {
        type master;
        file "dynamic/cloud.devops.org.db";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
systemctl enable named
iptables -I OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
iptables -I OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT

After this, add these lines to /etc/sysconfig/iptables so they persist across reboots.

-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT

On file: /var/named/dynamic/devops.org.db

$ORIGIN .
$TTL 1 ; 1 second (for testing only)
devops.org       IN SOA  ns1.devops.org. hostmaster.devops.org. (
            2011112904 ; serial
            60         ; refresh (1 minute)
            15         ; retry (15 seconds)
            1800       ; expire (30 minutes)
            10         ; minimum (10 seconds)
            )
        NS  ns1.devops.org.
        MX  10 mail.devops.org.
$ORIGIN devops.org.
ns1      A   10.211.55.11
master   A   10.211.55.10
node01   A   10.211.55.11
node02   A   10.211.55.12
desktop  A   192.168.0.14

On file: /var/named/dynamic/cloud.devops.org.db

$ORIGIN .
$TTL 1 ; 1 second (for testing only)
cloud.devops.org       IN SOA  ns1.cloud.devops.org. hostmaster.cloud.devops.org. (
            2011112904 ; serial
            60         ; refresh (1 minute)
            15         ; retry (15 seconds)
            1800       ; expire (30 minutes)
            10         ; minimum (10 seconds)
            )
        NS  ns1.cloud.devops.org.
        MX  10 mail.cloud.devops.org.
$ORIGIN cloud.devops.org.
*           A   10.211.55.10

10.211.55.10 is the host where the router is running.
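
Before restarting, you can validate the configuration and zone files with the checking tools shipped with BIND:

named-checkconf /etc/named.conf
named-checkzone devops.org /var/named/dynamic/devops.org.db
named-checkzone cloud.devops.org /var/named/dynamic/cloud.devops.org.db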

Restart the service

systemctl restart named

SSH keys

From the master machine, run:
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub master
[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node01
[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node02

OpenShift Install

Before installing, connect via SSH to all nodes and to the master itself; no password should be requested.
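
A quick check across all hosts (each should print the remote hostname without prompting):

for host in master node01 node02; do ssh $host hostname; done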

[root@master ~]# yum install python-virtualenv gcc -y
[root@master ~]# mkdir /tmp/ose;cd /tmp/ose
[root@master ~]# curl -o oo-install-ose.tgz https://install.openshift.com/portable/oo-install-ose.tgz
[root@master ~]# tar -zxf oo-install-ose.tgz
[root@master ~]# ./oo-install-ose

Add users

useradd ramalho
useradd silva

New projects

oadm new-project dsv --display-name='Development' --description='Environment hosting the development applications' --admin=ramalho
oadm new-project hmg --display-name='Staging' --description='Environment hosting the staging applications' --admin=admin
oadm new-project cidi --display-name='Integration' --description='Continuous integration and delivery' --admin=ramalho
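
Confirm the projects were created:

oc get projects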

Regions and Zones

oc label --overwrite node master.devops.org region="infra" zone="default"
oc label --overwrite node node02.devops.org region="hmg" zone="primary"
oc label --overwrite node node01.devops.org region="dsv" zone="primary"
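
Verify the labels were applied (oc describe node <name> also lists them):

oc get nodes --show-labels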

Edit default node selector

vim /etc/openshift/master/master-config.yaml
defaultNodeSelector: ""
systemctl restart openshift-master

Relax some permissions

oc edit scc

Output:

NAME         PRIV      CAPS      HOSTDIR   SELINUX    RUNASUSER
privileged   true      []        true      RunAsAny   RunAsAny
restricted   false     []        false     RunAsAny   RunAsAny

Jenkins (Redhat registry)

oc new-app jenkins-1-rhel7 -e "JENKINS_PASSWORD=redhat" -l "region=cidi,app=jenkins-redhat"
oc volume dc/jenkins-1-rhel7 --add --overwrite -t persistentVolumeClaim \
--claim-name=claim-jenkins-redhat --name=jenkins-1-rhel7-volume-1
oc expose service jenkins-1-rhel7
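
The --claim-name arguments in this and the following sections assume the PersistentVolumeClaims already exist. A minimal sketch of one such claim (the 1Gi size is an assumption; repeat with the other claim names as needed):

# claim-jenkins-redhat.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-jenkins-redhat
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

oc create -f claim-jenkins-redhat.yaml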

Open http://jenkins-1-rhel7-cidi.cloud.devops.org/configure (Manage Jenkins > Configure System) and:

  • Add JDK

  • Add Maven

Jenkins Community

oc new-app docker.io/jenkins -l "region=cidi,app=jenkins"
oc volume dc/jenkins --add --overwrite -t persistentVolumeClaim \
--claim-name=claim-jenkins --name=jenkins-volume-1
oc expose service jenkins

Nexus

oc new-app docker.io/sonatype/nexus \
  -l 'region=cidi,app=nexus'
oc volume dc/nexus --add --overwrite -t persistentVolumeClaim \
  --claim-name=claim-nexus --name=nexus-volume-1

Local settings.xml

[nexus/settings.xml]

All developers should use this same configuration on their local machines.
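
The repository's file is not reproduced here; as a rough sketch, a Maven settings.xml that routes all dependency requests through Nexus typically looks like this (the route URL is an assumption, presuming the nexus service was exposed in the cidi project):

<!-- ~/.m2/settings.xml (sketch) -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus-cidi.cloud.devops.org/content/groups/public/</url>
    </mirror>
  </mirrors>
</settings>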

Gitlab

oc new-app gitlab/gitlab-ce:8.0.3-ce.1 -l 'region=cidi,app=gitlab-ce'
oc volume dc/gitlab-ce --add --overwrite -t persistentVolumeClaim \
--claim-name=gitlab-claim-etc --name=gitlab-ce-volume-1
oc volume dc/gitlab-ce --add --overwrite -t persistentVolumeClaim \
--claim-name=gitlab-claim-log --name=gitlab-ce-volume-2
oc volume dc/gitlab-ce --add --overwrite -t persistentVolumeClaim \
--claim-name=gitlab-claim-opt --name=gitlab-ce-volume-3
oc expose service gitlab-ce
oc exec -it gitlab-ce-4-hr71q /bin/sh
vim /var/export/gitlab-etc/gitlab.rb
external_url 'http://gitlab-ce-cidi.cloud.devops.org'
gitlab-ctl reconfigure

Github Integration

https://github.com/settings/applications/new
oc exec -it gitlab-ce-4-hr71q /bin/sh
vim /var/export/gitlab-etc/gitlab.rb
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "github",
    "app_id" => "...",
    "app_secret" => "....",
    "args" => { "scope" => "user:email" }
  }
]
gitlab-ctl reconfigure

Solving the 502 error

The route created by oc expose likely points at the service's first port (22, SSH), so HTTP requests never reach GitLab and the router answers 502. Edit the service and remove the port 22 entry so the HTTP port is used:

oc edit svc gitlab-ce
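
After the edit, the ports section of the service should look roughly like this (a sketch; the port name follows what oc new-app generates):

ports:
- name: 80-tcp
  port: 80
  protocol: TCP
  targetPort: 80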