@charity
Created April 14, 2016 00:29
# file name terraform/modules/aws_vpc/vpc.tf
# First create the VPC.
# Prefix resource names with var.env so we can trivially run many environments.
resource "aws_vpc" "mod" {
  cidr_block           = "${var.cidr}"
  enable_dns_hostnames = "${var.enable_dns_hostnames}"
  enable_dns_support   = "${var.enable_dns_support}"
  tags {
    Name = "${var.env}_vpc"
  }
}
resource "aws_internet_gateway" "mod" {
vpc_id = "${aws_vpc.mod.id}"
tags {
Name = "${var.env}_igw"
}
}
# For each entry in the list of availability zones, create the public subnet
# and private subnet at that list index, then create an EIP and a NAT gateway
# in each public subnet. Further down, a route table is created for each
# private subnet with a default route through the matching NAT gateway.
resource "aws_subnet" "private" {
  vpc_id            = "${aws_vpc.mod.id}"
  cidr_block        = "${element(split(",", var.private_ranges), count.index)}"
  availability_zone = "${element(split(",", var.azs), count.index)}"
  count             = "${length(compact(split(",", var.private_ranges)))}"
  tags {
    Name = "${var.env}_private_${count.index}"
  }
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.mod.id}"
cidr_block = "${element(split(",", var.public_ranges), count.index)}"
availability_zone = "${element(split(",", var.azs), count.index)}"
count = "${length(compact(split(",", var.public_ranges)))}"
tags {
Name = "${var.env}_public_${count.index}"
}
map_public_ip_on_launch = true
}
# Routes are defined as standalone aws_route resources instead of inline
# route {} blocks on the route tables, so specific environments can layer on
# their own routes (e.g. VPC peering; see the sketch below the public
# gateway route).
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.mod.id}"
  tags {
    Name = "${var.env}_public_subnet_route_table"
  }
}
# Add a default route through the internet gateway to the public route table.
resource "aws_route" "public_gateway_route" {
  route_table_id         = "${aws_route_table.public.id}"
  depends_on             = ["aws_route_table.public"]
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.mod.id}"
}
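# As a sketch of what that refactor enables: an environment consuming this
# module could add its own peering route outside this file. Everything below
# is hypothetical and illustrative only (the module output, the peering
# connection, and the CIDR are not part of this module):
#
# resource "aws_route" "to_peer" {
#   route_table_id            = "${module.vpc.public_route_table_id}"  # assumes an output exposing aws_route_table.public.id
#   destination_cidr_block    = "10.1.0.0/16"                          # the peer VPC's CIDR
#   vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
# }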
resource "aws_eip" "nat_eip" {
count = "${length(split(",", var.public_ranges))}"
vpc = true
}
resource "aws_nat_gateway" "nat_gw" {
count = "${length(split(",", var.public_ranges))}"
allocation_id = "${element(aws_eip.nat_eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
depends_on = ["aws_internet_gateway.mod"]
}
# For each of the private ranges, create a "private" route table.
resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.mod.id}"
  count  = "${length(compact(split(",", var.private_ranges)))}"
  tags {
    Name = "${var.env}_private_subnet_route_table_${count.index}"
  }
}
# Add a default route through the matching NAT gateway to each private
# subnet's route table.
resource "aws_route" "private_nat_gateway_route" {
  count                  = "${length(compact(split(",", var.private_ranges)))}"
  route_table_id         = "${element(aws_route_table.private.*.id, count.index)}"
  destination_cidr_block = "0.0.0.0/0"
  depends_on             = ["aws_route_table.private"]
  nat_gateway_id         = "${element(aws_nat_gateway.nat_gw.*.id, count.index)}"
}
# Each private subnet needs its own route table association; the public
# subnets all share the single public route table.
resource "aws_route_table_association" "private" {
  count          = "${length(compact(split(",", var.private_ranges)))}"
  subnet_id      = "${element(aws_subnet.private.*.id, count.index)}"
  route_table_id = "${element(aws_route_table.private.*.id, count.index)}"
}
resource "aws_route_table_association" "public" {
  count          = "${length(compact(split(",", var.public_ranges)))}"
  subnet_id      = "${element(aws_subnet.public.*.id, count.index)}"
  route_table_id = "${aws_route_table.public.id}"
}
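
The module references a handful of input variables that aren't included in the gist. A minimal variables.tf sketch that would satisfy those references (the names match the code above; the example values, defaults, and comments are assumptions, not part of the original module):

# file name terraform/modules/aws_vpc/variables.tf (not in the gist; a sketch)
variable "cidr" {}                 # VPC CIDR, e.g. "10.0.0.0/16"
variable "env" {}                  # environment name used to prefix resources
variable "azs" {}                  # comma-separated AZs, e.g. "us-east-1a,us-east-1b"
variable "public_ranges" {}        # comma-separated public subnet CIDRs
variable "private_ranges" {}       # comma-separated private subnet CIDRs
variable "enable_dns_hostnames" { default = "true" }
variable "enable_dns_support" { default = "true" }

And a hypothetical caller, one per environment (every value here is illustrative):

# file name terraform/staging/main.tf (hypothetical)
module "vpc" {
  source         = "../modules/aws_vpc"
  cidr           = "10.0.0.0/16"
  env            = "staging"
  azs            = "us-east-1a,us-east-1b"
  public_ranges  = "10.0.0.0/24,10.0.1.0/24"
  private_ranges = "10.0.100.0/24,10.0.101.0/24"
}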
@mbravorus commented Jul 4, 2016

What is the reason for provisioning multiple NAT Gateways per VPC? I mean, if you were rolling DIY NAT instances, you might be concerned about bandwidth limitations, but the whole point of these managed thingies is to get rid of those concerns, right? So it would follow that we only need one NAT gateway per VPC to handle all the private subnets' traffic. What am I missing? Is it solely for multi-AZ redundancy?

@ktstevenson

@mbravorus NAT Gateways are AZ specific. While it is possible to share a gateway between AZs, if the AZ the gateway lives in has an outage, everything using the gateway is affected. Paranoid engineering creates a gateway in each AZ where you need NAT services.
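
For reference, the single-gateway variant @mbravorus describes would be a small change to the gist. A hypothetical sketch (it saves per-gateway cost, but every private subnet then depends on one AZ):

resource "aws_eip" "nat_eip" {
  vpc = true
}
resource "aws_nat_gateway" "nat_gw" {
  allocation_id = "${aws_eip.nat_eip.id}"
  subnet_id     = "${element(aws_subnet.public.*.id, 0)}"  # lives in the first public subnet's AZ
  depends_on    = ["aws_internet_gateway.mod"]
}
# ...and every private route table points at the same gateway:
resource "aws_route" "private_nat_gateway_route" {
  count                  = "${length(compact(split(",", var.private_ranges)))}"
  route_table_id         = "${element(aws_route_table.private.*.id, count.index)}"
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = "${aws_nat_gateway.nat_gw.id}"
}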
