RPC Repo Infrastructure

Date: 2013-09-05 09:51
tags: rackspace, lxc, rpc, openstack, cloud, ansible
category: *nix

This is a brief description of what the repo plays provide and how the Rackspace Private Cloud (RPC) repositories are hosted.

Overview:
The repo infrastructure ideally exists within the RPC cloud deployment as containers; however, this is not a hard requirement. The repo playbooks were created in such a way that they can also be deployed outside the RPC infrastructure.

Internals

  • NGINX serves all of the static content and also runs a git server for all repositories that are presently within the ansible-lxc-rpc system. This provides a simple, fast web service that is easy to maintain.
  • The git repository is served with the standard git-core CGI script and is available at <prefix>://<repo-url>/rpcgit. This provides a single place to get source code from when an environment does not have public Internet access.
  • All three of the nodes are kept in sync using lsync from the master node to the slaves. The lsync utility wraps rsync via a Lua plugin and ensures that the slave nodes stay in sync with the master.
  • All repository servers have an rpc_mirror rsync group set up and allow anonymous read-only rsync. The rsync group can be used as such: rsync -avz --progress <repository-uri>::rpc_mirror /path/to/folder (see the example after this list).
  • The first server built within the infrastructure and listed in the Ansible inventory is the "master" server; it runs the lsync daemon and several cron jobs that ensure packages are always built and up to date.
  • If the playbooks are used to deploy the repository servers on bare metal hosts, outside the RPC container infrastructure, it is up to the user to ensure host security. The repo playbooks only provide the application and its relevant parts.
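
For example, pulling a local copy of the mirror and cloning a hosted repository might look like the following; the server address 10.0.0.1, the destination folder, and the repository name keystone are placeholders for your own values:

# Anonymous read-only rsync of the package mirror to a local folder.
rsync -avz --progress 10.0.0.1::rpc_mirror /opt/local-mirror

# Clone a hosted repository from the internal git server.
git clone http://10.0.0.1/rpcgit/keystone /tmp/keystone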

Setting up the repo in containers

To build the repository servers in the environment of a running RPC system, add the following to the rpc_user_config.yml file:

repo_hosts:
  repo1:
    ip: 172.29.236.100

The repo_hosts section lists all host machines that the repo containers will be built on. These entries support the same options as everything else within the ansible-lxc-rpc deployment system. All relevant inventory and infrastructure bits will be built as needed throughout the deployment. One thing worth mentioning about the repo plays is that they can be run at any time once the base setup plays have been completed, which largely do nothing but set up LXC on the physical host machines.
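
For example, a three-node layout (the first host listed becomes the "master" described in the Internals section) might look like this; the IP addresses are placeholders for your own host management addresses:

repo_hosts:
  repo1:
    ip: 172.29.236.100
  repo2:
    ip: 172.29.236.101
  repo3:
    ip: 172.29.236.102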

Once the environment is ready to receive the containers, ensure that all of the required containers are built:

ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/setup/build-containers.yml

Ensure that memcached is set up:

ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/memcached-install.yml

And now install all of the repo bits throughout the infrastructure:

ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/repo-install.yml

Once the play completes, your repos will be ready for use from within the environment. The IP address of the repo will vary between deployments; check lxc-ls -f on the repo host machines for the container IP addresses of your nodes. If you are using the haproxy play, please ensure that it has been rerun with the updated container IP addresses for the repo containers.
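
As a quick sanity check, something like the following, run on a repo host machine, lists the containers with their addresses; the grep filter assumes the repo containers have "repo" in their names, so adjust it to your naming scheme:

# Show all containers with their IP addresses and filter for the repo ones.
lxc-ls -f | grep repo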

HAProxy can be set up with the following play:

ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/haproxy-install.yml

Setting up the repo on bare metal

When using bare metal infrastructure you have two options:
  1. Update the environment yml file and set is_metal: true on the pkg_repo sections. This enables "on metal" deployment of the repos from within an RPC deployment using dynamic inventory.
  2. Create a basic Ansible inventory file to manage the repos by themselves, outside of any deployed RPC environment.

If you go the route of modifying the environment yml, the deployment is EXACTLY the same as doing it within containers. However, if you create a basic Ansible inventory file, there are a few extra steps you need to follow.

Deploying on metal outside RPC

  • The first thing you will need to do is kick a Linux host with Ubuntu 14.04 running kernel 3.13.0-32 or greater.
  • On all of your hosts, make sure you have Python 2.7 installed.
    • If you are bootstrapping all of your repo hosts from within one of the hosts, install python-dev as well.
  • Create your basic Ansible inventory file in the inventory directory, normally located at /opt/ansible-lxc-rpc/rpc_deployment/inventory/repo-inventory.ini.
Example Inventory file:
[pkg_repo]
10.0.0.1 ansible_ssh_host=10.0.0.1
10.0.0.2 ansible_ssh_host=10.0.0.2
10.0.0.3 ansible_ssh_host=10.0.0.3

[memcached]
10.0.0.1 ansible_ssh_host=10.0.0.1

[memcached:vars]
is_metal=true
memcached_listen=10.0.0.1
required_kernel=3.13.0-32-generic
get_pip_url=http://mirror.rackspace.com/rackspaceprivatecloud/downloads/get-pip.py
rpc_repo_url=http://mirror.rackspace.com/rackspaceprivatecloud

[pkg_repo:vars]
memcached_encryption_key=ThisIsTheKey
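
Before running the plays it is worth confirming that Ansible can reach every host in the new inventory; a minimal check from /opt/ansible-lxc-rpc/rpc_deployment/:

# Verify SSH connectivity to every host in the pkg_repo group.
ansible -i inventory/repo-inventory.ini pkg_repo -m ping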
With the inventory file in place, execute the following:
  • Move to the directory /opt/ansible-lxc-rpc/rpc_deployment/
  • Set up memcached:
ansible-playbook -i inventory/repo-inventory.ini playbooks/infrastructure/memcached-setup.yml
  • Set up the repository servers:
ansible-playbook -i inventory/repo-inventory.ini playbooks/infrastructure/repo-setup.yml
  • Seed your repo mirror from the upstream repo mirror:
ansible-playbook -i inventory/repo-inventory.ini playbooks/infrastructure/repo-clone-mirror.yml
# This command uses rsync to clone from the Rackspace upstream repo mirror.

If everything executed perfectly, you should now be able to explore your repository via a web browser and begin using it internally. If your on-metal servers are in a walled garden and can not be accessed with a web browser, lynx is available from within the repo servers and can be used as a CLI-based web browser. The git repo is also automatically set up, so you should be able to test it out using "git clone" on localhost. Example: git clone http://127.0.0.1/rpcgit/keystone /tmp/keystone. If everything executed correctly and your repo is web-browsable, you are ready to begin using it for all Python packages as well as git repositories.
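
For Python packages, pip can be pointed at the mirror directly. A minimal sketch, assuming the wheels are published under a python_packages path on the repo server; browse the repo in a web browser first and adjust the URL to whatever layout your repo actually serves:

# Install a package from the internal mirror instead of PyPI.
# The python_packages path below is an assumption about the repo layout.
pip install --index-url http://127.0.0.1/python_packages/ python-keystoneclient

Depending on how the mirror is laid out, --find-links may be the more appropriate pip flag than --index-url.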

Using the repo infrastructure and the repo-playbooks

The repo infrastructure has several parts to it that allow you to build packages whenever needed. Everything package-related within the repo infrastructure is owned by the nginx user. All of the packages are built as this user, including by the six-hour cron job that is executed out of the user's crontab.

  • If the need arises to rebuild all of the packages, change to the nginx user and execute /opt/rpc-wheel-builder.sh.
  • If the need arises to build the packages for a particular SHA, branch, or tag, export the release name into an environment variable and then execute the build script, e.g.: export RELEASES="<BRANCH||TAG||SHA>"; /opt/rpc-wheel-builder.sh.
  • Log files for all wheel-building actions can be found at /var/log/rpc_wheel_builder.log.
  • While the wheel-building process is being executed, a lock file is dropped at /var/run/wheel_builder.lock. If for some reason this file is present yet nothing is being built, remove the file and execute the build script again (see the sketch after this list).
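
A minimal sketch of clearing a stale lock; the pgrep check below guards against removing the lock out from under a live build:

# Remove the lock file only if no wheel builder process is running,
# then re-run the build script as the nginx user.
if [ -f /var/run/wheel_builder.lock ] && ! pgrep -f rpc-wheel-builder.sh > /dev/null; then
    rm /var/run/wheel_builder.lock
    sudo -u nginx /opt/rpc-wheel-builder.sh
fi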

Optional Things to think about

While not required, I recommend that you set up some basic firewall rules. This is JUST AN EXAMPLE; please set the values to match what you have within your infrastructure.

Example post-up firewall script located at /etc/network/if-up.d/firewall:
#!/usr/bin/env bash
# Load IP Tables Rules
/sbin/iptables-restore < /etc/iptables.rules
Make sure the script is executable:
chmod +x /etc/network/if-up.d/firewall
Example post-up firewall rules file located at /etc/iptables.rules:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LOGNDROP - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# memcached port
-A INPUT -s 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 11211 -j ACCEPT
-A INPUT -s 10.0.0.2/32 -i eth1 -p tcp -m tcp --dport 11211 -j ACCEPT
-A INPUT -s 10.0.0.3/32 -i eth1 -p tcp -m tcp --dport 11211 -j ACCEPT
# SSH port on internal network
-A INPUT -s 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 10.0.0.2/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 10.0.0.3/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
# SSH port on external network
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
# HTTP port on internal network
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
# HTTPs port on internal network
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
# RSYNC port on all networks
-A INPUT -p tcp -m tcp --dport 873 -j ACCEPT
-A INPUT -j LOGNDROP
-A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
-A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
-A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
-A LOGNDROP -j DROP
COMMIT
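
To apply and verify the rules by hand, without waiting for an interface event:

# Load the rules immediately and list the active rule set with counters.
/sbin/iptables-restore < /etc/iptables.rules
iptables -L -n -v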
  • I install fail2ban on my public repo servers; if you do nothing else, please do this. It's simple and it's better than nothing.
  • vnstat is a useful tool that can track bandwidth and graph the data over time.
  • Outside of the firewall you should set up some type of monitoring. If you don't have any monitoring in your environment, have a look at Rackspace Cloud Monitoring if your servers are within the Rackspace managed hosting infrastructure, or New Relic.