@bramford
Created July 25, 2017 05:57
A list of tasks to do when preparing to start a DevOps role (Ansible, AWS, docker, Jenkins, Debian, Python)

DevOps warmup

  1. Create a CloudFormation template that provisions the following AWS resources:
  • VPC
  • Subnet
  • Internet gateway (with route table entry)
  • Security Group (to allow your IP in for SSH)
  • IAM role that allows full AWS Route53 access
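
The CloudFormation step above might be sketched like this (the logical names, CIDR ranges and parameter are illustrative assumptions; the Outputs section is optional but handy for later automation):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: DevOps warmup VPC stack (illustrative sketch)
Parameters:
  MyIp:
    Type: String
    Description: Your public IP in CIDR form, e.g. 203.0.113.7/32
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  Subnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
  Igw:
    Type: AWS::EC2::InternetGateway
  IgwAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref Vpc
      InternetGatewayId: !Ref Igw
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc
  DefaultRoute:
    Type: AWS::EC2::Route
    DependsOn: IgwAttachment
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref Igw
  SubnetRouteAssoc:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet
      RouteTableId: !Ref RouteTable
  SshSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH from my IP only
      VpcId: !Ref Vpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: !Ref MyIp
  Route53Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: {Service: ec2.amazonaws.com}
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonRoute53FullAccess
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles: [!Ref Route53Role]
Outputs:
  SubnetId:
    Value: !Ref Subnet
  SecurityGroupId:
    Value: !Ref SshSecurityGroup
  InstanceProfileName:
    Value: !Ref InstanceProfile
```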
  2. Write an Ansible playbook that:
  • Runs the CloudFormation template
  • Provisions a Debian Stretch EC2 instance:
    • Into the VPC created by CloudFormation
    • Using the official Debian Stretch AMI for your region of choice
    • With the IAM role created by CloudFormation
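
A minimal sketch of that playbook, assuming the stack exposes the subnet, security group and instance profile as CloudFormation outputs (region, AMI id and key name are placeholders):

```yaml
# provision.yml -- illustrative sketch, Ansible 2.x-era modules
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run the CloudFormation template
      cloudformation:
        stack_name: devops-warmup
        state: present
        region: eu-west-1
        template: files/vpc.yml
        template_parameters:
          MyIp: "203.0.113.7/32"
      register: cfn

    - name: Provision a Debian Stretch instance into the new VPC
      ec2:
        region: eu-west-1
        image: ami-xxxxxxxx          # official Debian Stretch AMI for your region
        instance_type: t2.micro
        key_name: mykey
        vpc_subnet_id: "{{ cfn.stack_outputs.SubnetId }}"
        group_id: "{{ cfn.stack_outputs.SecurityGroupId }}"
        instance_profile_name: "{{ cfn.stack_outputs.InstanceProfileName }}"
        assign_public_ip: true
        wait: true
```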
  3. Write another Ansible playbook that configures the Debian install on the EC2 instance:
  • Adds a new user
  • Installs docker and adds the new user to the docker group - use the tekniqueltd.docker Ansible role via Ansible Galaxy (also available on GitHub)
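
That second playbook could look roughly like this (the inventory group and user name are illustrative; install the role first with `ansible-galaxy install tekniqueltd.docker`):

```yaml
# configure.yml -- illustrative sketch
- hosts: warmup_instance
  become: true
  roles:
    - role: tekniqueltd.docker
  tasks:
    - name: Add a new user and put them in the docker group
      user:
        name: deploy
        shell: /bin/bash
        groups: docker
        append: true
```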
  4. Create a Route53 hosted zone for a domain or subdomain
  • If you don't already have a domain, you can either register a new one or grab a free subdomain (google "free subdomains")
  5. Write a Python script that updates an A record (under your domain) with the current public IP of the EC2 instance (DIY dynamic DNS)
  • Use the 'boto' library (or its successor 'boto3') for AWS calls
  • Use the 'requests' library with https://ifconfig.co/ (or similar) to get your current public IP
  • Put it in a public git repository on GitHub (create yourself an account if you don't have one)
  6. Write a Dockerfile to build a docker image that runs the above Python script
  • Use debian:stretch as the base image
  • The Dockerfile should be stored/versioned in the same git repository as the Python logic
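
A sketch of such a Dockerfile, assuming the script and a requirements.txt live in the repo root (note that Stretch is long EOL, so apt sources may now need pointing at archive.debian.org):

```dockerfile
FROM debian:stretch

RUN apt-get update \
 && apt-get install -y --no-install-recommends python python-pip \
 && rm -rf /var/lib/apt/lists/*

COPY requirements.txt ddns.py /app/
RUN pip install -r /app/requirements.txt

CMD ["python", "/app/ddns.py"]
```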
  7. Write a docker-compose file that:
  • Creates a docker bridge network
  • Creates a docker storage volume
  • Runs Jenkins as a docker container
    • With all Jenkins data stored in the storage volume
    • On the bridge network
    • Listening on TCP 80 (or ideally 443, with valid TLS provided by Let's Encrypt)
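
A minimal compose sketch covering those points (network and volume names are illustrative; for 443 with TLS you would put a Let's Encrypt-enabled proxy in front):

```yaml
# docker-compose.yml -- illustrative sketch
version: '3'

networks:
  jenkins_net:
    driver: bridge

volumes:
  jenkins_home:

services:
  jenkins:
    image: jenkins/jenkins:lts
    networks:
      - jenkins_net
    volumes:
      - jenkins_home:/var/jenkins_home   # all Jenkins data lives in the volume
    ports:
      - "80:8080"                        # Jenkins listens on 8080 inside the container
```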
  8. Create a Jenkins job for the Python script docker image that:
  • Clones the Python script repo
  • Builds a docker image from the Dockerfile and tags it
  • Pushes the tagged docker image to a hub.docker.com repository (create one)
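
As a pipeline job this might look like the following (the repo URL and Docker Hub namespace are placeholders, and the agent is assumed to have docker available and be logged in to Docker Hub):

```groovy
// Jenkinsfile -- illustrative sketch
pipeline {
  agent any
  stages {
    stage('Clone') {
      steps {
        git 'https://github.com/<you>/ddns-script.git'
      }
    }
    stage('Build and tag') {
      steps {
        sh 'docker build -t <you>/ddns:${BUILD_NUMBER} .'
      }
    }
    stage('Push') {
      steps {
        sh 'docker push <you>/ddns:${BUILD_NUMBER}'
      }
    }
  }
}
```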
  9. Create another Jenkins job for the Python script docker image that:
  • Pulls the latest Python script docker image
  • Runs it
  • Is triggered by a cron schedule every minute
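
The run job could be a second small pipeline with a cron trigger (image name again a placeholder):

```groovy
// Jenkinsfile -- illustrative sketch of the scheduled run job
pipeline {
  agent any
  triggers {
    cron('* * * * *')   // every minute
  }
  stages {
    stage('Pull and run') {
      steps {
        sh 'docker pull <you>/ddns:latest'
        sh 'docker run --rm <you>/ddns:latest'
      }
    }
  }
}
```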
  10. Get the Python script running on AWS Lambda (updating a different A record)