Created March 12, 2018 12:58
Overview of how to secure SSH keys for use by Ansible

Ansible connects to remote machines using SSH. This leads to a challenge running Ansible at scale: How do we manage the private SSH keys Ansible uses to connect to the remote machines it manages? We can keep those keys on the Ansible Controller, but this makes them difficult to rotate, and makes the Controller a high value target in the network for attackers. Let's look at a better option: moving those SSH keys into a vault, and retrieve those keys only when they are needed for an Ansible playbook run.

Setting the Stage

Everyone's environment is going to look a bit different. Let's start by defining some context for our example environment: We have two applications: Foo and Bar, and two different environments: staging and production. Each application has a load balancer to manage traffic. In our production environment, applications Foo and Bar each have five application servers running behind the load balancer. In staging, each application has a single node. Each node for a given application and environment has the same SSH key pair. That means we have four key pairs: staging Foo, staging Bar, production Foo, and production Bar.
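These four key pairs map onto a single environment/application naming pattern, which pays off later when scripting against them. A quick sketch of the variable paths used throughout this post (the `ansible/` prefix comes from the policy namespace defined below):

```shell
# Print the Conjur variable path for each of the four key pairs
for env in staging production; do
  for app in foo bar; do
    echo "ansible/$env/$app/ssh_private_key"
  done
done
# prints:
#   ansible/staging/foo/ssh_private_key
#   ansible/staging/bar/ssh_private_key
#   ansible/production/foo/ssh_private_key
#   ansible/production/bar/ssh_private_key
```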

Getting Started

The easiest way to get started with Conjur is to try the hosted version: sign up for a free hosted Conjur account.

Describing our Policy

Let's create a simple policy to allow our Ansible Controller to retrieve the four private SSH keys. We also want our admin to be able to update the private keys.

# ansible.yml

- !policy
  id: ansible
  body:

  # define a YAML collection `keys` to hold our SSH key variables
  - &keys
    # create variables to hold the private keys
    - !variable staging/foo/ssh_private_key
    - !variable staging/bar/ssh_private_key
    - !variable production/foo/ssh_private_key
    - !variable production/bar/ssh_private_key

  # create a group with permission to retrieve the SSH keys
  - !group secrets-users

  # Give the `secrets-users` group read and execute privileges on the
  # variables in the `keys` collection defined above (read provides
  # visibility, execute allows retrieval of the value)
  - !permit
    role: !group secrets-users
    privileges: [ read, execute ]
    resource: *keys

  # A layer is a group of one or more machines. We'll use this layer to
  # give our Ansible Controller access to the SSH private keys above.
  - !layer

  # Define a host factory for this layer. A host factory allows us to
  # generate a short-lived, IP-restricted token to auto-enroll our
  # Ansible Controller into the layer.
  - !host-factory
    layer: [ !layer ]

  # Now let's give this layer the ability to retrieve the SSH private keys
  - !grant
    member: !layer
    role: !group secrets-users
The above is a bare-bones policy to get us started. We can refactor this policy later to give us more flexibility and control.
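The policy file doesn't take effect until it is loaded into Conjur. Assuming the file above is saved as ansible.yml, loading it into the root namespace looks roughly like this (exact syntax varies between Conjur CLI versions):

```shell
$ conjur policy load root ansible.yml
```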

Setting SSH private keys in Conjur

Now that we've created variables, let's add our private SSH keys to Conjur.

# after logging in to Conjur, load each private key

$ conjur variable values add ansible/staging/foo/ssh_private_key "$(cat ssh_keys/foo/staging_rsa)"

$ conjur variable values add ansible/production/foo/ssh_private_key "$(cat ssh_keys/foo/production_rsa)"

$ conjur variable values add ansible/staging/bar/ssh_private_key "$(cat ssh_keys/bar/staging_rsa)"

$ conjur variable values add ansible/production/bar/ssh_private_key "$(cat ssh_keys/bar/production_rsa)"

Now we have our policy that defines our SSH keys and gives our Ansible Controller permission to retrieve those keys. Next, we need to give our Ansible Controller an identity. We'll do this using Ansible.

First, install the Conjur Role:

$ ansible-galaxy install cyberark.conjur-host-identity 

Next, update the Ansible Controller playbook to include our Conjur role:

# playbooks/ansible_controller.yml

- hosts: ansible
  roles:
    - role: cyberark.conjur-host-identity
      conjur_appliance_url: ''
      conjur_account: 'myorg'
      conjur_host_factory_token: "{{ lookup('env', 'HFTOKEN') }}"
      conjur_host_name: "{{ inventory_hostname }}"

You'll notice the environment variable HFTOKEN. We'll generate a Host Factory token for our Ansible layer and populate HFTOKEN prior to running this playbook. The Host Factory token will auto-enroll our Ansible Controller into the ansible layer.

A Host Factory Token is a short-lived key (optionally IP-restricted), used to auto-enroll a host (server) into a layer. Host Factory tokens allow automated systems to enroll new instances when they are provisioned without requiring human intervention.

In this example, we'll generate a token that is valid for 3 minutes and restricted to an IP subnet.

$ hf_token=$(conjur hostfactory tokens create --duration-minutes 3 --cidr ansible/ansible  | jq -r '.[0].token') 

With our Host Factory token generated, we have three minutes to enroll our Ansible Controller. Let's give it an identity!

$ HFTOKEN="$hf_token" ansible-playbook "playbooks/ansible_controller.yml" 

Our Conjur role does a few things:

  1. Connects to Conjur and, using the generated Host Factory token, creates a Conjur host and enrolls it into the ansible layer (which has permission to retrieve the remote SSH keys).

  2. Creates two files: /etc/conjur.conf (contains information about the location of Conjur) and /etc/conjur.identity (contains authentication information needed to retrieve secrets from Conjur).

  3. Installs Summon and the Summon-Conjur provider, which make retrieving and providing secrets to a process a breeze.
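For reference, /etc/conjur.conf is a small YAML file. On an enrolled host it looks roughly like this (the values here are illustrative, not exact):

```yaml
# /etc/conjur.conf (illustrative values)
account: myorg
appliance_url: https://conjur.example.com/api
cert_file: /etc/conjur-myorg.pem
```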

Once the playbook has been run successfully, we're ready to update Ansible to use the keys stored in Conjur to connect to our remote hosts:

$ summon --yaml 'SSH_KEY: !var:file ansible/staging/foo/ssh_private_key' bash -c 'ansible-playbook --private-key $SSH_KEY playbook/applications/foo.yml' 

What's happening here? Let's run through the steps:

  1. Summon connects to Conjur, using the /etc/conjur.conf and /etc/conjur.identity files for authentication.

  2. Since we've given our host execute permission on the Conjur variable ansible/staging/foo/ssh_private_key, Summon retrieves its value and writes it to a temporary file. The temporary file's path is stored in the SSH_KEY environment variable.

  3. The temporary file is passed to the ansible-playbook process through the argument --private-key.

  4. Once the ansible-playbook process completes, the temporary file is removed from the system, leaving no trace of our SSH key.
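Steps 2 through 4 above can be emulated in a few lines of plain shell. This is a simplified sketch of the pattern, not Summon's actual implementation (the secret value here is obviously fake):

```shell
secret='fake-private-key-material'            # in reality, fetched from Conjur
keyfile=$(mktemp)                             # temporary file to hold the secret
printf '%s' "$secret" > "$keyfile"
chmod 600 "$keyfile"                          # keep the key readable only by us
SSH_KEY="$keyfile" bash -c 'cat "$SSH_KEY"'   # the child process sees only a file path
# prints: fake-private-key-material
rm -f "$keyfile"                              # no trace left once the child exits
```

The key property is that the secret never appears on the child's command line or in its environment as a value, only as a path to a short-lived file.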

Optionally, Summon can be used with a secrets.yml file that defines one section per environment. For our Ansible example, our secrets.yml file might look like:

production-foo:
  SSH_KEY: !var:file ansible/production/foo/ssh_private_key

production-bar:
  SSH_KEY: !var:file ansible/production/bar/ssh_private_key

staging-foo:
  SSH_KEY: !var:file ansible/staging/foo/ssh_private_key

staging-bar:
  SSH_KEY: !var:file ansible/staging/bar/ssh_private_key

With the above secrets.yml file on our Ansible Controller, we can run our playbook as follows:

$ summon -e staging-foo bash -c 'ansible-playbook --private-key $SSH_KEY playbook/applications/foo.yml' 

Running in Production

We've illustrated this example using the open source, hosted version of Conjur. When using Conjur to manage credentials in production, we strongly recommend running Conjur behind SSL/TLS (for example, terminated by Nginx), or using Conjur Enterprise.
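Concretely, "Conjur with SSL (using Nginx)" means TLS termination in front of the Conjur service. A minimal sketch, with assumed certificate paths and an assumed upstream port:

```nginx
server {
    listen 443 ssl;
    server_name conjur.example.com;                  # assumed hostname

    ssl_certificate     /etc/nginx/tls/conjur.crt;   # assumed cert path
    ssl_certificate_key /etc/nginx/tls/conjur.key;   # assumed key path

    location / {
        proxy_pass http://127.0.0.1:8080;            # assumed Conjur listen port
        proxy_set_header Host $host;
    }
}
```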
