
Managing OpenStack instances with Ansible through an SSH bastion host

I'll be using DreamCompute as my OpenStack provider, but there are dozens to choose from. I assume you already have Ansible and the OpenStack CLI tools installed.
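If you don't have them yet, one way to install both is via pip (a minimal sketch; pinning versions is up to you):

pip install ansible python-novaclient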

Motivation

With the proliferation of OpenStack public clouds offering free and intro tiers, it's becoming very easy to effectively run a simple application for free or nearly free. Also with the emergence of Ansible, you don't need to learn and deploy complicated tools to do configuration management.

However, a typical limitation with free OpenStack offerings is that you don't have many public IPs to work with. This makes it a little annoying to manage your instances using a "push" configuration management tool like Ansible, because you have to run the tool from inside the private network.

Of course you could use something like Salt, where an agent running on each instance connects back to a master process running on another instance on the private network. Salt and friends (Chef, Puppet, ...) are much more complicated than Ansible, though, and I don't have a devops team or a lot of time to dedicate to a side project running on a couple of free VMs!

You could just install Ansible on one instance with a public IP, push your playbooks there, then SSH into that host to run them. I don't particularly like this option, because now I have to either install all the OpenStack CLI tools on that box too, or run Ansible on the remote host but the OpenStack tools from my laptop. Also, any time I make a change to a playbook, I have to push it to my remote Ansible box. Ansible is getting a lot more complicated all of a sudden...

SSH to the Rescue

Luckily SSH has a feature called agent forwarding that solves this problem. The folks at DualSpark cobbled together the scant information on using this with Ansible and kindly wrote a blog post about it. Here I'm just going to tie that information together with a few more details to get an end-to-end example of how to get up and running quickly.
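In a nutshell, once your keypair is loaded into your local agent, the -A flag forwards the agent to the bastion, so the bastion can authenticate onward to instances on the private network without ever holding your key (hostnames and the private IP below are placeholders):

ssh-add yourkeypair.pem
ssh -A dhc-user@YOURFLOATINGIP
ssh 10.10.10.4   # run from the bastion; authenticates via your forwarded agent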

Initial Setup

From the OpenStack management UI, create a new security group. Allow ICMP and SSH for that security group. You may also want to create a new keypair to use with your instances. If so, download the keypair and add it to your SSH agent (using ssh-add). The equivalent CLI commands:

# The names below (yourkeypair, yoursecgroup) are placeholders; substitute your own
nova keypair-add yourkeypair > yourkeypair.pem
chmod 600 yourkeypair.pem && ssh-add yourkeypair.pem
nova secgroup-create yoursecgroup "SSH and ICMP access"
nova secgroup-add-rule yoursecgroup icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule yoursecgroup tcp 22 22 0.0.0.0/0

Download the OpenStack RC file from the "Access and Security" section. This is just a convenient shell script that sets some OpenStack-related environment variables. Source the script.
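For example (the RC filename varies by project):

source yourproject-openrc.sh
nova list   # should authenticate and return an (empty) list of instances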

If you're setting this up on a free account (say DreamCompute) you probably only have a single floating IP. Considering this limitation, we're going to use one of our servers as both a web server (in this case) and an SSH bastion. If you're running a legitimate business, you'd probably create a dedicated jump box and pay for another IP.

Spin up a new instance in the security group you created earlier. To begin with, this will just be an SSH jump box, although later it will also turn into a web server (or whatever). Associate a floating IP with the instance.

# Flavor, image, and instance name are placeholders; pick real values from `nova flavor-list` and `nova image-list`
nova boot --flavor YOURFLAVOR --image YOURIMAGE --key-name yourkeypair --security-groups yoursecgroup bastion
nova add-floating-ip bastion YOURFLOATINGIP

Now you need to be able to connect from this instance to other instances on the private network using SSH. Since I'm lazy I'm just going to push my OpenStack keypair up to the host.

scp yourkeypair.pem dhc-user@YOURFLOATINGIP:.ssh/id_rsa
ssh dhc-user@YOURFLOATINGIP chmod 600 .ssh/id_rsa

In order for the SSH forwarding through the bastion to work properly, you'll need to install netcat, since the ProxyCommand in the ssh-config below pipes connections through nc. We'll also go ahead and update everything to be sure we have any security patches, etc.

sudo yum -y update
sudo yum -y install nc

Ansible Configuration

Configure Ansible to connect to private IPs through the bastion (on your floating IP) as described in the ansible.cfg and ssh-config snippets below. This will allow us to manage all instances with Ansible by their private IPs, including the bastion host itself.
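Before touching ansible.cfg, you can sanity-check the bastion hop by hand with an equivalent one-off ProxyCommand (the private IP here is a placeholder, mirroring the ssh-config below):

ssh -o ProxyCommand="ssh -q -A dhc-user@YOURFLOATINGIP nc %h %p" dhc-user@10.10.10.4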

Add the private IPs of your instances to Ansible's inventory (see the inventory.ini snippet below).

You should now be able to ping all of your instances. For DreamCompute, we connect as dhc-user (similar to ec2-user):

ansible -vvvv -i inventory.ini all -m ping -u dhc-user

A Test Playbook

Create a simple playbook to run on each instance (see playbook.yml below). In this case we'll just enable some extra repositories for CentOS, install Python 3 via Software Collections, and optionally create a new user to run our application.

OK now run your playbook!

ansible-playbook -i inventory.ini -u dhc-user playbook.yml
# Add the following lines to /etc/ansible/ansible.cfg
[defaults]
system_warnings = False
host_key_checking = False
ask_sudo_pass = False
[ssh_connection]
ssh_args = -o ControlPersist=15m -F /etc/ansible/ssh-config -q
scp_if_ssh = True
control_path = ~/.ssh/ansible-%%r@%%h:%%p
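# inventory.ini: list your instances' private IPs here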
[webservers]
10.10.10.2
10.10.10.4
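# playbook.yml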
---
- hosts: webservers
  remote_user: dhc-user
  sudo: yes
  tasks:
    - name: enable EPEL repo
      yum: name=epel-release state=latest
    - name: enable RPMForge
      yum: name=http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm state=present
    - name: enable SCLs
      yum: name=scl-utils state=latest
    - name: install Python33 SCL
      yum: name=https://www.softwarecollections.org/repos/rhscl/python33/epel-7-x86_64/noarch/rhscl-python33-epel-7-x86_64-1-2.noarch.rpm state=present
    - name: install Python3
      yum: name=python33 state=latest
# Either drop a .bashrc for dhc-user that enables the Python 3 SCL
# Or create another user that will run apps and have their .bashrc do this
# scl enable python33 bash
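# A hedged sketch of the second option; the user name and the SCL enable-script
# path are assumptions, not part of the original playbook:
#     - name: create an application user
#       user: name=appuser state=present
#     - name: enable the Python 3.3 SCL in appuser's login shell
#       lineinfile: dest=/home/appuser/.bashrc line="source /opt/rh/python33/enable" create=yes owner=appuser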
# Add your floating IP and keypair and place in /etc/ansible/ssh-config
Host 10.10.10.*
    ServerAliveInterval 60
    TCPKeepAlive yes
    ProxyCommand ssh -q -A dhc-user@YOURFLOATINGIP -i /etc/ansible/yourkeypair.pem nc %h %p
    ControlMaster auto
    ControlPath ~/.ssh/ansible-%r@%h:%p
    ControlPersist 8h
    User dhc-user
@ffledgling

@seansawyer I'm a little curious as to why both the ssh-config and the ansible.cfg file need to have the ssh options for ControlMaster, ControlPath and ControlPersist. Does ansible not respect the ssh-config?
