@nocode99
Last active February 6, 2018 19:19
Terraform setup

Scope

This document covers how to install Terraform and configure your Terraform environment. It also outlines additional steps to give an overview of how Terraform works and its benefits. Some of these steps could ultimately be combined, since they are fairly repetitious.

Terraform Installation

  1. Go to https://www.terraform.io/downloads.html and download the appropriate version of Terraform for your platform.
  2. Unpack the file and move the terraform binary to somewhere in your $PATH (e.g. /usr/local/bin on Linux/Mac). If you are using Windows, you can follow these instructions: https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows
  3. Confirm terraform is working:
$ terraform --version
Terraform v0.11.3

S3 Bucket Setup

Terraform creates a state file after every run. The main goal is for Terraform to be able to compare the state with the Terraform config files to check whether any changes need to be made. Terraform best practices suggest keeping these state files out of Git, because multiple people working simultaneously in the same project can create conflicts. One option is to store the state file in S3; Terraform will pull/push the file when you issue terraform commands. There is also a locking mechanism to ensure only one process is making changes at a time, but we will skip that for now.

You can choose to use an existing bucket or create a new one, but we will want versioning enabled on the bucket. As an extra security measure, we should also have a KMS key ID for Terraform to use; this will encrypt the state file for further protection.
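The state bucket is usually created by hand (console or CLI), but for illustration, managing it from a separate Terraform configuration might look like the sketch below. The bucket name and key description are placeholders, and this resource would live outside the current project, since the bucket must exist before this project's backend can use it:

```hcl
# Placeholder names -- the state bucket must exist before this project's
# backend can reference it, so manage it separately (or create it by hand)
resource "aws_kms_key" "tf_state" {
  description = "Encrypts Terraform state files"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state"   # S3 bucket names are globally unique

  versioning {
    enabled = true
  }
}
```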

AWS Environment

You can either export environment variables or configure credential files. We will need to set up AWS access/secret keys on your machine, since Terraform uses the AWS API. Credential files: https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html Environment vars: https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html

Like many tools, Terraform relies on the AWS SDK, and with the credentials set above, Terraform will automatically look them up. For the sake of this exercise, the AWS credentials being used will need full access to S3 and encrypt/decrypt access to the KMS key.
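If you go the environment-variable route, the setup can be sketched as follows (the key values below are the standard placeholders from the AWS documentation, not real credentials):

```shell
# Placeholder credentials from the AWS docs -- substitute your own IAM user's keys
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRcYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"   # optional; our provider block also sets the region
```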

Version Control

For reference, all files outlined in this doc will have a file reference below as well.

We'll want to create a new repository or use an existing repo. In either scenario, let's ensure we have an empty directory to work with, and create a folder s3_buckets. Let's also create a .gitignore, as there are files we do not want to store in version control.
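These steps can be sketched as follows (the ignore patterns match the .gitignore listed at the end of this doc):

```shell
# Create the working folder and ignore Terraform's generated files
mkdir -p s3_buckets
cat > .gitignore <<'EOF'
.terraform/
*.tfstate
*.tfstate.backup
EOF
```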

Terraform

In the s3_buckets directory, let's create a terraform.tf file. NOTE: Terraform automatically loads all files with the .tf extension in its current working directory and flattens them into one configuration. It builds its own dependency graph and uses it to determine the order in which resources are created.

Initializing

Directory structure:

.
└── s3_buckets
    └── terraform.tf

terraform.tf

terraform {
  required_version = ">=0.11.3"
}

provider "aws" {
  region  = "us-east-1"  # I'm assuming you are in us-east-1, change this if necessary
  version = "1.8.0"
}

and now let's run terraform init inside the s3_buckets folder. This will initialize the directory and configure it, ready to use.

$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (1.8.0)...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

If successful, you should also now see a .terraform/ directory. This directory contains plugin information and is not stored in version control.

Configuring the remote state

This section will show how to configure terraform to store the state file in S3.

Let's create a remote_state.tf

terraform {
    backend "s3" {
        region      = "us-east-1"
        bucket      = "my-s3-bucket"       # note: S3 bucket names cannot contain underscores
        key         = "terraform/s3_buckets/terraform.tfstate"
        encrypt     = true
        kms_key_id  = "arn:aws:kms:us-east-1:123456789:key/abcdef1-abcd-def123-abcd-ef123456789"
    }
}

The bucket attribute should reference the S3 bucket created in the step above, and kms_key_id the KMS key used to encrypt the state file in S3. Note that whenever you add or change backend configuration, you need to run terraform init again so Terraform can configure the new backend.

Importing the resource

Let's create a new file called s3.tf (#v1 below). NOTE: I created a test bucket via the console, but should you have any additional permissions on your S3 bucket, the output may be different.

s3.tf

resource "aws_s3_bucket" "kepler-tests" {
  bucket = "kepler-tests"
}

Now let's run terraform plan

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  
Terraform will perform the following actions:
  + aws_s3_bucket.kepler-tests
      id:                  <computed>
      acceleration_status: <computed>
      acl:                 "private"
      arn:                 <computed>
      bucket:              "kepler-tests"
      bucket_domain_name:  <computed>
      force_destroy:       "false"
      hosted_zone_id:      <computed>
      region:              <computed>
      request_payer:       <computed>
      versioning.#:        <computed>
      website_domain:      <computed>
      website_endpoint:    <computed>
      
Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

As you can see, it is looking to add a new S3 bucket, but that is not what we want. Since this bucket already exists, we can import the resource to our state file:

$ terraform import aws_s3_bucket.kepler-tests kepler-tests
aws_s3_bucket.kepler-tests: Importing from ID "kepler-tests"...
aws_s3_bucket.kepler-tests: Import complete!
  Imported aws_s3_bucket (ID: kepler-tests)
aws_s3_bucket.kepler-tests: Refreshing state... (ID: kepler-tests)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

The command terraform import takes two arguments. The first is the resource address: the resource type and name. In our example, that is aws_s3_bucket.kepler-tests; you'll notice it matches the first line of the s3.tf file. The second argument, kepler-tests, is the name of the S3 bucket (the ID to import). Now let's run terraform plan again!

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_s3_bucket.kepler-tests: Refreshing state... (ID: kepler-tests)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
  
Terraform will perform the following actions:
  ~ aws_s3_bucket.kepler-tests
      acl:           "" => "private"
      force_destroy: "" => "false"
      
Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

This time we see that Terraform wants to modify the bucket rather than create it (note: creation would have failed, since the resource already exists in AWS). The changes here apply some defaults that were not explicitly set when creating the S3 bucket in the console. Let's go ahead and modify s3.tf (#v2 below) to include those options.

Let's go ahead and apply the changes. You'll be prompted to approve the change and will need to type in yes

$ terraform apply
aws_s3_bucket.kepler-tests: Refreshing state... (ID: kepler-tests)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
  
Terraform will perform the following actions:
  ~ aws_s3_bucket.kepler-tests
      acl:           "" => "private"
      force_destroy: "" => "false"
      
Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?

  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  
  Enter a value: yes
  
aws_s3_bucket.kepler-tests: Modifying... (ID: kepler-tests)
  acl:           "" => "private"
  force_destroy: "" => "false"
aws_s3_bucket.kepler-tests: Modifications complete after 1s (ID: kepler-tests)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

If we run terraform plan again, we will see no changes need to be made!

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_s3_bucket.kepler-tests: Refreshing state... (ID: kepler-tests)

------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

Let's add the policy

Let's add the s3_policy.json (note: the account ID on line 8 is made up) to the S3 bucket. We'll need to modify s3.tf (#v3 below) to include the line policy = "${file("${path.cwd}/s3_policy.json")}". This uses Terraform's interpolation syntax; the file() function reads the policy file from disk. We can run terraform plan one more time to be sure there are not going to be any unintended consequences:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...

The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_s3_bucket.kepler-tests: Refreshing state... (ID: kepler-tests)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
  
Terraform will perform the following actions:

  ~ aws_s3_bucket.kepler-tests
      policy: "" => "{\n   \"Version\": \"2012-10-17\",\n   \"Statement\": [\n      {\n         \"Sid\": \"ListAccess\",\n         \"Effect\": \"Allow\",\n         \"Principal\": {\n            \"AWS\": \"arn:aws:iam::1234567890:root\"\n         },\n         \"Action\": [\n            \"s3:GetBucketLocation\",\n            \"s3:ListBucket\",\n            \"s3:GetObject\",\n            \"s3:PutObject\"\n         ],\n         \"Resource\": [\n            \"arn:aws:s3:::kepler-tests\",\n            \"arn:aws:s3:::kepler-tests/*\"\n         ]\n      }\n   ]\n}\n\n"
      
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Again, Terraform lets us see what kind of changes we will be making and ensure there are no unintended consequences. We can run terraform apply and approve the change; we now have the ability to validate changes before they reach production.
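One caveat worth noting: because the policy is plain JSON pulled in via file(), a malformed file won't surface until Terraform parses it. A quick pre-check can catch that early. The sketch below writes a minimal stub policy in a scratch directory so it is self-contained; in practice you would run the check against your real s3_policy.json:

```shell
# Work in a scratch dir and write a minimal stub policy;
# in practice, point the check at your real s3_policy.json
cd "$(mktemp -d)"
cat > s3_policy.json <<'EOF'
{"Version": "2012-10-17", "Statement": []}
EOF
python3 -m json.tool s3_policy.json > /dev/null && echo "s3_policy.json is valid JSON"
```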

Conclusion

We should have a directory structure like below:

.
└── s3_buckets
    ├── remote_state.tf
    ├── s3_policy.json
    ├── s3.tf
    └── terraform.tf

and should have three versioned copies of our Terraform state file in the other S3 bucket. We also imported an existing S3 bucket, which let us apply a bucket policy to it. Since we are storing this in version control, we can use pull requests to validate changes and also use Terraform to validate any changes to your AWS account.

# File should be named .gitignore
.terraform/
*.tfstate
*.tfstate.backup

# v1
resource "aws_s3_bucket" "kepler-tests" {
  bucket = "kepler-tests"
}

# v2
resource "aws_s3_bucket" "kepler-tests" {
  bucket        = "kepler-tests"
  acl           = "private"
  force_destroy = false
}

# v3
resource "aws_s3_bucket" "kepler-tests" {
  bucket        = "kepler-tests"
  acl           = "private"
  force_destroy = false
  policy        = "${file("${path.cwd}/s3_policy.json")}"
}

s3_policy.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::12345789012:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::kepler-tests",
        "arn:aws:s3:::kepler-tests/*"
      ]
    }
  ]
}

terraform.tf

terraform {
  required_version = ">=0.11.3"
}

provider "aws" {
  region  = "us-east-1"
  version = "1.8.0"
}