@guilleiguaran
Created November 13, 2012 05:49
OpenShift Install in EC2

Before you start

  • Use the official CentOS 6 AMI; for details, check the wiki.

  • Enable SELinux; check this Red Hat tutorial to do it.

  • Create a security group for both the broker and the node with ports 22, 80, and 443 open.

  • The DNS (udp/53) and MCollective (tcp/61613) ports on the broker must be reachable from the nodes.

  • Ports 35531-65535 must be open on the node for port forwarding.

  • Enable SSH access for the root user on the nodes; this is needed in order to copy the broker's public key to each node. It can be done in /etc/ssh/sshd_config, as shown in the sketch below.
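
A minimal sketch of the /etc/ssh/sshd_config change, followed by a restart of sshd (PermitRootLogin without-password allows key-based root logins only; use yes if password logins are also wanted):

PermitRootLogin without-password

service sshd restart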

Base System

These steps should be done in all machines (brokers and nodes)

Update system

Before starting, it is recommended to install the latest updates for the system:

yum update

Setting up Time Synchronisation

Install and enable the NTP service to keep time synced between the broker and nodes:

yum install ntp ntpdate
ntpdate clock.redhat.com
chkconfig ntpd on
service ntpd start

Setup Remote Administration on servers

As root, create the .ssh folder, create an authorized_keys file in it, and paste your public SSH key into that file.
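
A minimal sketch of these steps, assuming you are logged in as root and your public key is available locally as id_rsa.pub:

mkdir -p /root/.ssh
chmod 700 /root/.ssh
cat id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys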

Setting Broker with Related Components

Setup Repositories

Create /etc/yum.repos.d/openshift-infrastructure.repo with this:

[openshift_infrastructure]
name=OpenShift Infrastructure
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Infrastructure/x86_64/os/
enabled=1
gpgcheck=0

Then run yum update so packages from the new repository are installed.

Setting up BIND / DNS

Install BIND package:

yum install bind bind-utils

We will need to refer to our domain example.com multiple times, so let's save it in an environment variable:

domain=example.com

Let's also create another variable to hold the filename of a new DNSSEC key for our domain:

keyfile=/var/named/${domain}.key

Then we use dnssec-keygen to generate a new DNSSEC key for the domain, deleting any old keys first:

rm -vf /var/named/K${domain}*
pushd /var/named
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
popd

Now we must ensure we have a key for the broker to communicate with BIND. We will use rndc-confgen, which generates the configuration files for rndc, the tool we will use for that communication.

rndc-confgen -a -r /dev/urandom

We must ensure that the ownership, permissions, and SELinux context are appropriate.

restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key

Since we are setting up a local BIND instance, we should configure forwarders for resolving internet hosts (10.0.0.2 is just an example; use the upstream resolver your instance received via DHCP). Create /var/named/forwarders.conf with this:

forwarders { 10.0.0.2; } ;

Ensure that the newly created file has its permissions and SELinux context set correctly.

restorecon -v /var/named/forwarders.conf
chmod -v 755 /var/named/forwarders.conf

BIND must be able to perform hostname resolution under the domain we are using for OpenShift, so we create a dynamic database for it.

rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic

Create an initial named database.

cat <<EOF > /var/named/dynamic/${domain}.db
\$ORIGIN .
\$TTL 1	; 1 seconds (for testing only)
${domain} IN SOA ns1.${domain}. hostmaster.${domain}. (
                         2011112904 ; serial
                         60         ; refresh (1 minute)
                         15         ; retry (15 seconds)
                         1800       ; expire (30 minutes)
                         10         ; minimum (10 seconds)
                          )
                     NS ns1.${domain}.
\$ORIGIN ${domain}.
ns1	              A        127.0.0.1

EOF

Next we should install the DNSSEC key for our domain.

cat <<EOF > /var/named/${domain}.key
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF

Set permissions and SELinux contexts for created files.

chown -Rv named:named /var/named
restorecon -rv /var/named

Create a new /etc/named.conf with BIND configuration.

cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
    listen-on port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
    include "forwarders.conf";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

// use the default rndc key
include "/etc/rndc.key";
 
controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${domain}.key";

zone "${domain}" IN {
    type master;
    file "dynamic/${domain}.db";
    allow-update { key ${domain} ; } ;
};
EOF

Set appropriate permissions:

chown -v root:named /etc/named.conf
chcon system_u:object_r:named_conf_t:s0 -v /etc/named.conf

Update /etc/resolv.conf to point at the local named service. In this example the broker's own address is 10.0.0.1:

nameserver 10.0.0.1

Make the service start on boot and set the proper firewall rule.

chkconfig named on
lokkit --service=dns

Start the named service so we can perform some updates immediately.

service named start

Add the broker's record to BIND using the nsupdate interactive console, substituting the broker's own address (10.0.0.1 in this example):

nsupdate -k ${keyfile}
> server 127.0.0.1
> update delete broker.example.com A
> update add broker.example.com 180 A 10.0.0.1
> send
> quit

Verify DNS configuration with some queries.

dig @127.0.0.1 broker.example.com
dig @127.0.0.1 google.com a
dig broker.example.com

Setting up DHCP

Modify /etc/dhcp/dhclient-eth0.conf so that the local BIND instance is used:

prepend domain-name-servers 10.0.0.1;
supersede host-name "broker";
supersede domain-name "example.com";

Set hostname in /etc/sysconfig/network:

HOSTNAME=broker.example.com

And set the hostname with the hostname command.

hostname broker.example.com

Verify that the hostname was set correctly.

hostname

Setting up MongoDB

Install MongoDB package.

yum install mongodb-server

Configure MongoDB by adding the following lines to /etc/mongodb.conf:

auth = true
smallfiles = true

Enable mongod to start on boot and start it immediately:

chkconfig mongod on
service mongod start

Verify that it's working correctly

mongo

Setting up ActiveMQ

Install ActiveMQ package with yum

yum install activemq

ActiveMQ can be configured in /etc/activemq/activemq.xml

cat <<EOF > /etc/activemq/activemq.xml
<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:\${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker.example.com" dataDirectory="\${activemq.data}">

        <!--
            For better performances use VM cursor and small memory limit.
            For more information, see:

            http://activemq.apache.org/message-cursors.html

            Also, if your producer is "hanging", it's probably due to producer flow control.
            For more information, see:
            http://activemq.apache.org/producer-flow-control.html
        -->

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
                  <pendingSubscriberPolicy>
                    <vmCursor />
                  </pendingSubscriberPolicy>
                </policyEntry>
                <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
                  <!-- Use VM cursor for better latency
                       For more information, see:

                       http://activemq.apache.org/message-cursors.html

                  <pendingQueuePolicy>
                    <vmQueueCursor/>
                  </pendingQueuePolicy>
                  -->
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <kahaDB directory="\${activemq.data}/kahadb"/>
        </persistenceAdapter>

        <!-- add users for mcollective -->

        <plugins>
          <statisticsBrokerPlugin/>
          <simpleAuthenticationPlugin>
             <users>
               <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
               <authenticationUser username="admin" password="secret" groups="mcollective,admin,everyone"/>
             </users>
          </simpleAuthenticationPlugin>
          <authorizationPlugin>
            <map>
              <authorizationMap>
                <authorizationEntries>
                  <authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
                  <authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
                  <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
                  <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
                  <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
                </authorizationEntries>
              </authorizationMap>
            </map>
          </authorizationPlugin>
        </plugins>

          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
            If using ActiveMQ embedded - the following limits could safely be used:

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="20 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="100 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>
        -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="64 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
        </transportConnectors>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos

        Take a look at \${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
EOF

Open the firewall port and enable activemq on boot.

lokkit --port=61613:tcp
chkconfig activemq on

And now you can start ActiveMQ service

service activemq start

Configure the ActiveMQ web console to allow only local requests in /etc/activemq/jetty.xml:

sed -i -e '/name="authenticate"/s/false/true/' /etc/activemq/jetty.xml
sed -i -e '/name="port"/a<property name="host" value="127.0.0.1" />' /etc/activemq/jetty.xml

Edit /etc/activemq/jetty-realm.properties to set the password for the admin user:

sed -i -e '/admin:/s/admin,/badpassword,/' /etc/activemq/jetty-realm.properties

Verify that ActiveMQ is working properly.

curl --head --user admin:badpassword http://localhost:8161/admin/xml/topics.jsp

Setting up MCollective

Install the MCollective client package:

yum install mcollective-client

Configure MCollective by editing /etc/mcollective/client.cfg with the following content:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective-client.log
loglevel = debug

# Plugins
securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.host = broker.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

Setting up Broker application

Install all required packages for broker application

yum install openshift-origin-broker openshift-origin-broker-util rubygem-openshift-origin-auth-remote-user rubygem-openshift-origin-msg-broker-mcollective rubygem-openshift-origin-dns-bind

Setting up required services

All the required services should be enabled on reboot

chkconfig httpd on
chkconfig network on
chkconfig sshd on

Setting Standard SELinux Boolean Variables

Set the required SELinux boolean variables:

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on

Relabel files and directories with the correct SELinux contexts

fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*

Configuring domain

Ensure OpenShift is using the appropriate domain:

sed -i -e "s/^CLOUD_DOMAIN=.*$/CLOUD_DOMAIN=${domain}/" /etc/openshift/broker.conf

Configuring plugins

Change to /etc/openshift/plugins.d directory

cd /etc/openshift/plugins.d

Enable the remote-user auth plug-in

cp openshift-origin-auth-remote-user.conf.example openshift-origin-auth-remote-user.conf

Enable the mcollective messaging plug-in

cp openshift-origin-msg-broker-mcollective.conf.example openshift-origin-msg-broker-mcollective.conf

Configure the dns-bind plug-in

cat <<EOF > openshift-origin-dns-bind.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${domain}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${domain}"
EOF

The dns-bind plug-in requires that an additional SELinux policy be compiled and installed

pushd /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/ && make -f /usr/share/selinux/devel/Makefile ; popd
semodule -i /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/dhcpnamedforward.pp

Setting up Authentication

Copy the example file into the Apache configuration folder.

cp /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf

Add a sample user 'username'

htpasswd -c /etc/openshift/htpasswd username
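
Note that the -c flag creates (or overwrites) the htpasswd file; to add more users to an existing file, omit it. For example, with a hypothetical second user anotheruser:

htpasswd /etc/openshift/htpasswd anotheruser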

Configuring Inter-Host access keys

Generate a key pair for the broker; this will be used by Jenkins and other plugins.

openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem

We also need to generate a key pair for the broker to use to move gears between nodes:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
cp ~/.ssh/rsync_id_rsa* /etc/openshift/

Configuring initial user account

Create an account in Mongo for the broker to use (choose a good password)

mongo openshift_broker_dev --eval 'db.addUser("openshift", "password")'

Edit /etc/openshift/broker.conf and set MONGO_PASSWORD to the password used for the openshift user.
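
For example, a one-line edit in the same style as the CLOUD_DOMAIN change above (a sketch; "password" is the placeholder used in the mongo example and should be replaced with your real password):

sed -i -e 's/^MONGO_PASSWORD=.*$/MONGO_PASSWORD="password"/' /etc/openshift/broker.conf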

Verify that the openshift user was created

echo 'db.system.users.find()' | mongo openshift_broker_dev

Running Broker Rails application

Change to the application folder and run Bundler

cd /var/www/openshift/broker
bundle --local

Configure the broker to start on boot and start it immediately

chkconfig openshift-broker on
service httpd start
service openshift-broker start

Set up proper firewall rules for the broker application

lokkit --service=https
lokkit --service=http

Verify that the broker is running correctly

curl -Ik https://localhost/broker/rest/api

Setting Node with Related Components

Setting up the OpenShift Node Repository

The node requires packages from the OpenShift Node repository and the OpenShift JBoss repository.

Create /etc/yum.repos.d/openshift-node.repo with the following content

[openshift_node]
name=OpenShift Node
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Node/x86_64/os/
enabled=1
gpgcheck=0

Create /etc/yum.repos.d/openshift-jboss.repo with the following content

[openshift_jbosseap]
name=OpenShift JBossEAP
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/JBoss_EAP6_Cartridge/x86_64/os/
enabled=1
gpgcheck=0

And finally run the update command

yum update

Setting DNS record for Node in Broker DNS server

Add a DNS record for the node host using the oo-register-dns command on the broker

keyfile=/var/named/example.com.key
oo-register-dns -h node -d example.com -n 10.0.0.2 -k ${keyfile}

Configuring Hostname Resolution

Edit /etc/resolv.conf to set the OpenShift broker as the DNS server

nameserver 10.0.0.1

Enabling Broker Access to node

Add the broker's SSH public key (saved on the broker host in /root/.ssh/rsync_id_rsa.pub) to /root/.ssh/authorized_keys on the node. This gives the broker access to the node so it can move gears between nodes.
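
One way to do this from the broker, as a sketch (it assumes root SSH access to the node, node.example.com here, is still enabled):

cat /root/.ssh/rsync_id_rsa.pub | ssh root@node.example.com 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'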

Setting up DHCP

Modify /etc/dhcp/dhclient-eth0.conf to set up the DNS and hostname information

prepend domain-name-servers 10.0.0.1;
supersede host-name "node";
supersede domain-name "example.com";

Set hostname in /etc/sysconfig/network:

HOSTNAME=node.example.com

And set the hostname with the hostname command.

hostname node.example.com

Verify that the hostname was set correctly.

hostname

Setting up MCollective

Install MCollective packages

yum install mcollective openshift-origin-msg-node-mcollective

Then edit /etc/mcollective/server.cfg to enable communication between the node and the broker

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = n
registerinterval = 30

# Plugins
securityprovider = psk
plugin.psk = unset
connector = stomp
plugin.stomp.host = broker.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

Make the service start on boot

chkconfig mcollective on

Start the mcollective service

service mcollective start

You can verify the communication between the broker and the node using the mco command on the broker

mco ping

Setting up the Node

Install core packages for Node

yum install rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util

Install some cartridges; you can get the list of available cartridges from the GitHub repository. You can also install all node cartridges at once

yum install openshift-origin-cartridge-*

Installing the openshift-origin-cartridge-cron-1.4 cartridge is mandatory, since it includes a script required for updating the configuration for communication between nodes and brokers.
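
If you prefer to install only a subset of cartridges instead of all of them, list what the repository actually provides and pick from that list (the php cartridge name below is only an illustrative assumption):

yum search openshift-origin-cartridge
yum install openshift-origin-cartridge-cron-1.4 openshift-origin-cartridge-php-5.3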

Setting up Required Services

Set proper firewall rules and enable services on reboot

lokkit --service=https
lokkit --service=http
chkconfig httpd on
chkconfig network on

Configuring cgroups

Configure cgroups by running the following commands:

cp -f /usr/share/doc/*/cgconfig.conf /etc/cgconfig.conf
restorecon -v /etc/cgconfig.conf
mkdir /cgroup
restorecon -v /cgroup
chkconfig cgconfig on
chkconfig cgred on
chkconfig openshift-cgroups on
service cgconfig restart
service cgred restart
service openshift-cgroups start

Configuring Disk Quotas

Disk quotas can be set in /etc/openshift/resource_limits.conf and must be enforced at the filesystem level. Enforcement is done by adding the usrquota option to the /etc/fstab entry for the partition containing /var/lib/openshift (a hypothetical entry is sketched below). After modifying /etc/fstab, remount the edited mount point. For example

mount -o remount /
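
For reference, a hypothetical /etc/fstab entry for a root filesystem with usrquota enabled might look like this (the device name and filesystem type will differ on your instance):

/dev/xvde1   /   ext4   defaults,usrquota   1 1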

Generate user quota info for the mount point

quotacheck -cmug /

Configuring SELinux

Configure the SELinux policy for the node and fix the SELinux contexts by setting some booleans

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on allow_polyinstantiation=on

Relabel files with the proper SELinux contexts

fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift

Configuring sysctl settings

Open /etc/sysctl.conf and increase kernel semaphores to accommodate many httpds

kernel.sem = 250  32000 32  4096

Move ephemeral port range to accommodate application proxies

net.ipv4.ip_local_port_range = 15000 35530

Increase the connection-tracking table size

net.netfilter.nf_conntrack_max = 1048576

Reload sysctl.conf and activate the new settings

sysctl -p /etc/sysctl.conf

Configuring SSH

Edit /etc/ssh/sshd_config to add GIT_SSH as AcceptEnv

AcceptEnv GIT_SSH

Change the maximum number of SSH connections

perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config

Configuring Port Proxy

Open the range of external ports that are allocated for application use

lokkit --port=35531-65535:tcp

Set the proxy service to start on boot and start it now

chkconfig openshift-port-proxy on
service openshift-port-proxy start

The openshift-gears service script starts gears when a node host is rebooted; enable it on boot

chkconfig openshift-gears on

Configuring Node settings

Edit /etc/openshift/node.conf and set the correct values for the node and broker

PUBLIC_IP=10.0.0.2
CLOUD_DOMAIN=example.com
PUBLIC_HOSTNAME=node.example.com
BROKER_HOST=10.0.0.1

Updating Facter database

Facter generates metadata files for MCollective and is normally run by cron. Run it now to build the initial database and ensure that it works properly

/etc/cron.minutely/openshift-facts

Reboot the system
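
Finally, reboot the node so that all of the changes above take effect:

reboot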

Setting up the Client Workstation

Install OpenShift Client

gem install rhc

By default rhc connects to the OpenShift hosted service; set LIBRA_SERVER to point it at your own broker, for example:

export LIBRA_SERVER=b01.us-e1.elevator.io

Setup client tools

rhc setup