@ryan0x44
Created August 25, 2015 01:14
Blue-Green AWS Auto Scaling Deployments with Terraform
module "static" {
  source = "./static"

  # Variables from variables.tf
  region                = "${var.region}"
  aws_access_key_id     = "${var.aws_access_key_id}"
  aws_secret_access_key = "${var.aws_secret_access_key}"
}

/* BLUE (Prefix with # to ENABLE, leave as /* to DISABLE)
module "blue" {
  source           = "./blue-green"
  web_ami_id       = "" # Get from Packer
  blue_green_value = "blue"

  # Pass outputs from static module as variables to blue-green module
  aws_elb_lb_id                  = "${module.static.aws_elb_lb_id}"
  aws_subnet_web_a_id            = "${module.static.aws_subnet_web_a_id}"
  aws_subnet_web_b_id            = "${module.static.aws_subnet_web_b_id}"
  aws_security_group_web_id      = "${module.static.aws_security_group_web_id}"
  aws_sns_topic_auto_scaling_arn = "${module.static.aws_sns_topic_auto_scaling_arn}"
}
/**/

#/* GREEN (Prefix with # to ENABLE, leave as /* to DISABLE)
module "green" {
  source           = "./blue-green"
  web_ami_id       = "ami-5d1c5f67" # Get from Packer
  blue_green_value = "green"

  # Pass outputs from static module as variables to blue-green module
  aws_elb_lb_id                  = "${module.static.aws_elb_lb_id}"
  aws_subnet_web_a_id            = "${module.static.aws_subnet_web_a_id}"
  aws_subnet_web_b_id            = "${module.static.aws_subnet_web_b_id}"
  aws_security_group_web_id      = "${module.static.aws_security_group_web_id}"
  aws_sns_topic_auto_scaling_arn = "${module.static.aws_sns_topic_auto_scaling_arn}"
}
/**/

A quick note on how I'm currently handling Blue/Green or A/B deployments with Terraform and AWS EC2 Auto Scaling.

In my particular use case, I want to be able to inspect an AMI deployment manually before disabling the previous deployment.

Hopefully someone finds this useful, and if you have any feedback please leave a comment or email me.

Overview

I build my AMIs using Packer and Ansible.

Every time I build a new AMI, I want to create a new Launch Configuration (LC) and Auto Scaling Group (ASG) running this AMI, and bring the fresh EC2 instances into circulation with my existing Load Balancer (ELB).

Finally, once I've verified the new ASG instances are working well, I will delete the old LC and ASG, which will shut down any instances running the older AMI.
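The blue-green module itself isn't included in this gist, but based on the variables it accepts, its two files might look roughly like the following sketch. The resource names, instance type, and group sizes are my assumptions, not the original configuration; only the variable names come from modules.tf above.

```hcl
# blue-green/launch_configuration.tf (hypothetical sketch)
resource "aws_launch_configuration" "web" {
  name            = "web-${var.blue_green_value}"
  image_id        = "${var.web_ami_id}"
  instance_type   = "t2.micro"
  security_groups = ["${var.aws_security_group_web_id}"]
}

# blue-green/autoscaling.tf (hypothetical sketch)
resource "aws_autoscaling_group" "web" {
  name                 = "web-${var.blue_green_value}"
  launch_configuration = "${aws_launch_configuration.web.name}"
  min_size             = 2
  max_size             = 4
  vpc_zone_identifier  = ["${var.aws_subnet_web_a_id}", "${var.aws_subnet_web_b_id}"]
  load_balancers       = ["${var.aws_elb_lb_id}"]
}
```

Because every resource name interpolates `blue_green_value`, the blue and green instantiations of the module never collide, so both can exist side by side during a deployment.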

Terraform Workflow

Here are the steps I follow to handle an AMI deployment (in this example, we're switching from blue to green):

  1. Update AMI ID for green module
  2. Enable green module
  3. Terraform plan then apply
  4. Verify new AMI deployment is working
  5. Disable blue module
  6. Terraform plan then apply
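At the command line, the steps above boil down to an edit and two plan/apply cycles. This is a sketch of my reading of the workflow; the ELB name in the verification step is a placeholder of mine, not from the gist.

```shell
# Steps 1-2: in modules.tf, set the new AMI ID in the green module and
# change its leading "/*" to "#/*" so Terraform parses the block.

# Step 3: create the green LC/ASG alongside blue
terraform plan
terraform apply

# Step 4: verify the green instances are InService behind the ELB
aws elb describe-instance-health --load-balancer-name my-elb

# Steps 5-6: restore the "/*" on the blue module, then tear blue down
terraform plan
terraform apply
```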

Terraform Setup

My Terraform configuration has been split into two modules: static and blue-green.

The blue-green module contains configuration for:

  • Launch Configuration
  • Auto Scaling Group

The static module contains all other network configuration, such as the VPC and ELB.

Finally, these two modules are tied together with a modules.tf file in the main directory.

e.g:

  • /infrastructure
    • .terraform/
    • blue-green/
      • autoscaling.tf
      • launch_configuration.tf
      • variables.tf
    • static/
      • variables.tf
      • vpc.tf
      • sg.tf
      • elb.tf
      • outputs.tf
    • modules.tf
    • variables.tf

We run terraform plan and terraform apply from the infrastructure/ directory.
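Based on the outputs that modules.tf consumes, static/outputs.tf presumably looks something like this. The resource references (aws_elb.web, aws_subnet.web_a, and so on) are my guesses; only the output names are taken from modules.tf above.

```hcl
# static/outputs.tf (hypothetical sketch)
output "aws_elb_lb_id" {
  value = "${aws_elb.web.id}"
}

output "aws_subnet_web_a_id" {
  value = "${aws_subnet.web_a.id}"
}

output "aws_subnet_web_b_id" {
  value = "${aws_subnet.web_b.id}"
}

output "aws_security_group_web_id" {
  value = "${aws_security_group.web.id}"
}

output "aws_sns_topic_auto_scaling_arn" {
  value = "${aws_sns_topic.auto_scaling.arn}"
}
```

This is what lets the long-lived network pieces stay untouched while the blue and green modules are created and destroyed around them.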

That's about it. I've included a copy of modules.tf for you to see how this part of the puzzle works :)

@roustem

roustem commented Oct 9, 2015

Ryan, this is very helpful! Thank you for sharing!

@rokka-n

rokka-n commented Oct 13, 2015

Ryan, is the ASG configured to handle replacement gracefully?
For example, if you have 3 nodes behind the ELB and you switch the LC/ASG, do you expect nodes to be replaced one by one?

Thanks.

@nvtkaszpir

@rokka-n, it depends on the AWS implementation and the ASG behaviour, especially on the connection draining settings and the health check. You should make the overall time to detect an instance as healthy lower than the connection draining timeout.
That way, new instances from the updated ASG will be In Service before instances from the old ASG are taken out of service during connection draining.
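To make that timing concrete, here is a minimal sketch (my own, not from the gist) of a classic ELB where an instance passes health checks, and so becomes In Service, well before the connection draining window elapses:

```hcl
resource "aws_elb" "web" {
  name               = "web"
  availability_zones = ["us-east-1a"] # placeholder; the gist attaches subnets via the static module

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    target              = "HTTP:80/"
    interval            = 10
    timeout             = 5
    healthy_threshold   = 2 # In Service after ~20s of passing checks
    unhealthy_threshold = 2
  }

  # Draining window comfortably exceeds the health-check detection time
  connection_draining         = true
  connection_draining_timeout = 60
}
```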

@ryan0x44
Author

@rokka-n the way I use this is: deploy green, wait for the new servers to spin up, check they're working well and recognised by the ELB, then disable blue.
