@tommybobbins
Last active June 24, 2021 10:04
Terraform Associate Revision
Write<->Plan<->Apply
[repo]<->[review]<->provision.
terraform->required_providers {}
->required_version = "~> 0.14"
terraform {
required_providers {
random = {
source = "hashicorp/random"
version = "3.0.0"
}
aws = {
source = "hashicorp/aws"
version = ">= 2.0.0"
}
}
required_version = "~> 0.14"
}
.terraform.lock.hcl (the dependency lock file) records the provider versions selected for the configuration.
If Terraform did not find a lock file, it would download the latest versions of the providers that fulfill the version constraints you defined in the required_providers block.
# terraform plan
================
Does not modify the state file
+ resource will be added
- resource will be destroyed
~ resource will be updated in place
-/+ resource will be destroyed and recreated (replaced)
It is important to consider that Terraform reads from data sources during the plan and apply phases and writes the result into the plan.
You can use the terraform_remote_state data source to use another Terraform workspace's output data.
data "terraform_remote_state" "vpc" {
backend = "local"
config = {
path = "../learn-terraform-data-sources-vpc/terraform.tfstate"
}
}
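The other workspace's outputs can then be read through the data source's outputs attribute -- a sketch, assuming the VPC workspace declares an output named subnet_id:

```hcl
# Hypothetical: assumes the vpc workspace has `output "subnet_id"`.
resource "aws_instance" "app" {
  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"

  # Outputs of the remote workspace are exposed under .outputs
  subnet_id = data.terraform_remote_state.vpc.outputs.subnet_id
}
```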
# terraform get
================
Download and install modules (optional; terraform init does this and more).
# terraform init
===============
Downloads auxiliary components (modules + plugins).
Sets up backend for storing terraform state file.
-backend-config=path This can be either a path to an HCL file with key/value
assignments (same format as terraform.tfvars) or a
'key=value' format. This is merged with what is in the
configuration file. This can be specified multiple
times. The backend type must be in the configuration
itself.
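A sketch of partial backend configuration: the backend type stays in the .tf file, while the path is supplied at init time (file name and path are illustrative):

```hcl
# backend.tf -- the backend type must live in the configuration itself
terraform {
  backend "local" {
    # path deliberately omitted; supplied at init time, e.g.:
    #   terraform init -backend-config="path=../shared/terraform.tfstate"
  }
}
```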
When a new provider is added to a configuration -- either explicitly via a provider block or by adding a resource from that provider -- Terraform must initialize it before it can be used. Initialization downloads and installs the provider's plugin so that it can later be executed.
The terraform init command downloads and initializes any providers that are not already initialized.
NOTE: In Terraform v0.12, terraform init cannot automatically download third-party providers (those not distributed by HashiCorp), but from Terraform v0.13 onward it can.
Connects to infrastructure (credentials are required during the init stage)
$ terraform init -upgrade # The -upgrade will upgrade all previously-selected plugins to the newest version that complies with the configuration's version constraints. Updates the .terraform.lock.hcl
terraform {
  required_providers {
    aws = ">= 3.1.0"
  }
}
$ terraform providers #prints information about the providers used in the current configuration.
# terraform apply
================
Deploys the configuration, reconciling it against the state file.
-var 'foo=bar' Set a variable in the Terraform configuration. This
flag can be set multiple times.
-var-file=foo Set variables in the Terraform configuration from
a file. If "terraform.tfvars" or any ".auto.tfvars"
files are present, they will be automatically loaded
If a state file is present but all the .tf files have been removed, terraform apply will destroy all the managed resources!
Passing provider credentials via environment variables keeps the username/password out of the configuration and state files.
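For example, the AWS provider picks up its standard credential environment variables, so nothing secret has to appear in the configuration:

```hcl
# Credentials come from the environment, e.g.:
#   export AWS_ACCESS_KEY_ID="..."
#   export AWS_SECRET_ACCESS_KEY="..."
provider "aws" {
  region = "us-east-1"
}
```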
# terraform destroy
=================
Destroy all resources
# terraform show
=================
The terraform show command is used to provide human-readable output from a state or plan file
# PROVIDER
========
provider "aws"
provider = keyword
aws = provider name
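A minimal provider block, plus an additional aliased configuration (alias name illustrative) -- aliases matter later when passing providers to modules:

```hcl
provider "aws" {
  region = "us-east-1"
}

# A second configuration of the same provider, selected via its alias
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
```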
# RESOURCE
========
resource "aws_instance" "web" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
}
# DATA
====
A data source fetches read-only details of an existing resource.
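A sketch of a data source that looks up an existing AMI (the filter values are illustrative):

```hcl
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Referenced elsewhere as data.aws_ami.amazon_linux.id
```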
All configuration files must end in .tf (or .tf.json).
Terraform looks for providers in the Terraform Registry: https://registry.terraform.io/browse/providers
# TERRAFORM STATE
===============
Performs resource tracking. terraform.tfstate - it is JSON.
terraform state mv [options] SOURCE DESTINATION
This command will move an item matched by the address given to the destination address. This command can also move to a destination address in a completely different state file.
This can be used for simple resource renaming, moving items to and from a module, moving entire modules, and more. And because this command can also move data to a completely new state, it can also be used for refactoring one configuration into multiple separately managed Terraform configurations.
$ terraform state list # command is used to list resources within a Terraform state.
# VARIABLES
============
Minimum variable is variable "variable_name" {}
Simple = string, number, bool
Complex = list, set, map, object, tuple.
List: A sequence of values of the same type.
Map: A lookup table, matching keys to values, all of the same type.
Set: An unordered collection of unique values, all of the same type.
Variables can set sensitive = true to prevent their values being displayed in output.
Variables are referenced in the form var.my-var-name
list = [ ]
If the same variable is assigned multiple values, Terraform uses the last value it finds, overriding any previous values.
Note that the same variable cannot be assigned multiple values within a single source.
Use the slice() function to get a subset of these lists.
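slice() takes a start index (inclusive) and an end index (exclusive):

```hcl
locals {
  azs       = ["us-east-1a", "us-east-1b", "us-east-1c"]
  first_two = slice(local.azs, 0, 2) # ["us-east-1a", "us-east-1b"]
}
```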
The Terraform language will automatically convert number and bool values to string values when needed, and vice-versa as long as the string contains a valid representation of a number or boolean value.
Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
* Environment variables
* The terraform.tfvars file, if present.
* The terraform.tfvars.json file, if present.
* Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.
* Any -var and -var-file options on the command line, in the order they are provided.
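A quick sketch of the precedence rules: given the same variable set in several places, the command-line flag wins (values illustrative):

```hcl
variable "region" {
  type    = string
  default = "us-east-1" # lowest precedence
}

# terraform.tfvars:  region = "eu-west-1"           # overrides the default
# CLI:  terraform apply -var 'region=eu-west-2'     # overrides everything else
```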
# OUTPUT
=========
Return values, comparable to a function's output.
$ terraform output -json
If an output is marked sensitive, its value is redacted in the plan/apply summary, but querying the output by name still reveals it:
$ terraform output db_password
mycoolmysqlpassword
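A sensitive output is declared as below (variable name illustrative); since Terraform 0.14, an output derived from a sensitive variable must itself be marked sensitive:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

output "db_password" {
  value     = var.db_password
  sensitive = true # redacted in the apply summary
}
```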
# PROVISIONER
=============
Custom provisioning, e.g. local-exec, remote-exec. Can run at creation time (default) or destroy time. Use only when user data cannot be used, as provisioner results are not tracked in terraform.tfstate.
A destroy-time provisioner within a resource that is tainted will not run.
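A sketch of a destroy-time local-exec provisioner (the command is illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    when    = destroy # runs on terraform destroy, not on create
    command = "echo 'instance destroyed' >> destroy.log"
  }
}
```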
# MODULES
==========
Downloaded or referenced from Terraform public registry, private registry. Associated configuration:
count
for_each
providers
depends_on
Outputs can be referenced using subnet_id = module.my-vpc-module.subnet_id
Module inputs
module "my-vpc-module" {
  source      = "./modules/vpc"
  server_name = "us-east-1" # Input parameter
}
A module can not access all parent module variables; hence to pass variables to a child module, the calling module should pass specific values in the module block.
Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.
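Passing an aliased provider into a child module via the providers map, sketched (module source and alias name are illustrative):

```hcl
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "dr_vpc" {
  source = "./modules/vpc"

  # Inside the module, the default "aws" provider is this aliased one
  providers = {
    aws = aws.west
  }
}
```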
Anyone can develop and distribute their own Terraform providers. Third-party providers must be manually installed into ~/.terraform.d/plugins, since terraform init cannot automatically download them.
Although provider configurations are shared between modules, each module must declare its own provider requirements, so that Terraform can ensure that there is a single version of the provider that is compatible with all modules in the configuration and to specify the source address that serves as the global (module-agnostic) identifier for a provider.
# TERRAFORM BUILT IN FUNCTIONS
===============================
join ("_", [ "foo", var.project_name ])
foo_bobbins
Other useful functions:
file
max
flatten
contains
$ terraform console # try out expressions in the CLI
NOTE: For the element() function, if the given index is greater than the length of the list, the index is "wrapped around" by taking the index modulo the length of the list:
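For example:

```hcl
locals {
  letters = ["a", "b", "c"]
  picked  = element(local.letters, 4) # 4 % 3 = 1, so "b"
}
```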
TYPE CONSTRAINTS
================
Simple: replicas=3, name="cluster2", backup=true # number, string, boolean
Complex: list, tuple, map, object
Collection: Multiple values of one primitive type can be grouped, e.g. list(type), map(type)
Structural: Multiple values of different primitive types to be grouped.
Any can be used as a placeholder for a primitive type, e.g.
variable "foo" {
  type    = list(any)
  default = [1, 42, 7]
}
Here the elements are all numbers, so Terraform infers list(number).
DYNAMIC BLOCKS
===============
A dynamic block acts much like a for expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value and generates a nested block for each element of that complex value.
Repeatable nested configuration blocks. Make code look cleaner.
dynamic "ingress" {
  for_each = var.rules
  content {
    from_port   = ingress.value["port"]
    to_port     = ingress.value["port"]
    protocol    = ingress.value["proto"]
    cidr_blocks = ingress.value["cidrs"]
  }
}
Note ingress is iterated over above; var.rules contains proto, port and cidrs mappings.
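The var.rules that the dynamic block iterates over could be declared like this (values illustrative):

```hcl
variable "rules" {
  type = list(object({
    port  = number
    proto = string
    cidrs = list(string)
  }))
  default = [
    { port = 80,  proto = "tcp", cidrs = ["0.0.0.0/0"] },
    { port = 443, proto = "tcp", cidrs = ["0.0.0.0/0"] },
  ]
}
```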
# terraform fmt
================
Formats all the .tf files for canonical layout.
# terraform taint
=================
Taints a resource, forcing it to be destroyed and recreated on the next apply. Modifies the state file, which causes the recreation workflow to take place. Other resources may be modified following a terraform taint.
$ terraform taint aws_instance.my_ec2_instance
$ terraform apply -replace=aws_instance.my_ec2_instance # the modern alternative to taint
# terraform import
===================
For an existing resource in AWS, import this into the state file. The associated code must already exist in a .tf file.
$ terraform import my_cool_ami arn:asdasdadaa
The import command can import resources into modules as well as directly into the root of your state.
# TERRAFORM BLOCK
==================
This is a special configuration block for controlling the behaviour of Terraform. It should contain only constant values -- it cannot reference named resources or variables. e.g.
terraform {
required_version = ">=0.13.0"
required_providers {
aws = ">= 3.0.0"
}
}
WORKSPACES
===========
$ terraform workspace new bobbins # Create and select workspace bobbins
$ terraform workspace select foo # Select workspace foo
For local state, Terraform stores the workspace states in a directory called terraform.tfstate.d. This directory should be treated similarly to local-only terraform.tfstate.
Workspaces can be referenced via ${terraform.workspace}
To ensure that workspace names are stored correctly and safely in all backends, the name must be valid to use in a URL path segment without escaping.
Workspaces are technically equivalent to renaming your state file. They aren't any more complex than that. Terraform wraps this simple notion with a set of protections and support for remote state.
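A common use of the interpolation, sketched: namespacing and sizing resources per workspace:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-a1b2c3d4"
  instance_type = terraform.workspace == "default" ? "t2.medium" : "t2.micro"

  tags = {
    Name = "web-${terraform.workspace}" # e.g. web-bobbins
  }
}
```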
# DEBUGGING
==========
Write to STDERR:
TF_LOG variable: TRACE, DEBUG, INFO, WARN, ERROR (TDIWE) ("to die we").
TF_LOG_PATH="/var/tmp/tf_out_debug.txt"
Logging can be enabled separately for terraform itself and the provider plugins using the TF_LOG_CORE or TF_LOG_PROVIDER environment variables. These take the same level arguments as TF_LOG, but only activate a subset of the logs.
# SENTINEL
========
Embedded policy-as-code framework. Ensures adherence to policies. Used within enterprise Terraform products (Terraform Cloud/Enterprise). Sentinel has its own language, which can be understood by non-programmers. Sentinel runs after plan and before apply.
Enables sandboxing, codification, version control, testing and automation of policy adherence. Example policies: CIS standards, restricting which resources are allowed (instances no bigger than t2.micro etc.), and requiring standard security groups.
Sentinel is a policy as a code framework that’s integrated into Hashicorp enterprise products. Sentinel allows users to define policies that are enforced against infrastructure between the plan and apply phases of a Terraform run.
# VAULT
=====
Serves two purposes: secrets used for services (e.g. RDS creds) and secrets which are injected during plan/apply. Vault receives long-term credentials and provides short-term credentials for building infrastructure. Secrets are automatically rotated and are encrypted at rest and in transit. Vault integrates with IAM.
Vault provider - terraform retrieves vault provider and then uses temporary keys for deployment.
Secrets are still persisted to the state file though, so be careful
# TERRAFORM REGISTRY
===================
Publicly available modules which are pulled in with terraform init.
Private module registries are available with Terraform Cloud.
# CLOUD WORKSPACES
================
Workspaces hosted in the cloud instead of within different directories on a filesystem (terraform.tfstate.d). Provides segregation, security and storage of configuration. It also maintains a record of all execution activity. All terraform commands are executed on managed Terraform Cloud VMs.
=============================================================
| COMPONENT       | WORKSPACE           | CLOUD WORKSPACES  |
=============================================================
| tf config       | Disk                | GitHub/GitLab etc |
| vars            | .tfvars             | Workspace         |
| state           | disk/s3             | Workspace         |
| creds + secrets | shell, envs + files | Workspace         |
=============================================================
# TF CLOUD
=========
Collaboration, workspaces, remote tf execution, revision control, remote state management, private tf module registry, cost estimates, Sentinel.
# ENTERPRISE EDITION
=====================
Enterprise edition is TF Cloud plus Clustering, Locally hosted Install, Private Network connectivity
CLHIPN
Terraform Enterprise is our self-hosted distribution of Terraform Cloud. It offers enterprises a private instance of the Terraform Cloud application, with no resource limits and with additional enterprise-grade architectural features like audit logging and SAML single sign-on.
# Module output
===============
module.<MODULE_NAME>.<OUTPUT_NAME>, e.g. module.my_module.flibble_id (resources inside a module are not directly addressable from the parent; the module must declare an output).
Private Registry Module Sources
===============================
Private registry modules have source strings of the form <HOSTNAME>/<NAMESPACE>/<NAME>/<PROVIDER>. This is the same format as the public registry, but with an added hostname prefix.
GitHub. The module must be on GitHub and must be a public repo. This is only a requirement for the public registry. If you're using a private registry, you may ignore this requirement.
* Named terraform-<PROVIDER>-<NAME>. Module repositories must use this three-part name format, where <NAME> reflects the type of infrastructure the module manages and <PROVIDER> is the main provider where it creates that infrastructure. The <NAME> segment can contain additional hyphens. Examples: terraform-google-vault or terraform-aws-ec2-instance.
* Repository description. The GitHub repository description is used to populate the short description of the module. This should be a simple one-sentence description of the module.
* Standard module structure. The module must adhere to the standard module structure. This allows the registry to inspect your module and generate documentation, track resource usage, parse submodules and examples, and more.
* x.y.z tags for releases. The registry uses tags to identify module versions. Release tag names must be a semantic version, which can optionally be prefixed with a v. For example, v1.0.4 and 0.9.2. To publish a module initially, at least one release tag must be present. Tags that don't look like version numbers are ignored.
Modules from git
================
module "vpc" {
source = "git::https://example.com/vpc.git"
}
module "storage" {
source = "git::ssh://username@example.com/storage.git"
}
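A specific revision can be pinned with the ref query parameter (tag name illustrative):

```hcl
module "vpc" {
  # ref accepts a tag, branch name, or full commit SHA
  source = "git::https://example.com/vpc.git?ref=v1.2.0"
}
```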
Structural types
=================
* object(...): A collection of named attributes that each have their own type.
The schema for object types is { <KEY> = <TYPE>, <KEY> = <TYPE>, ... } — a pair of curly braces containing a comma-separated series of <KEY> = <TYPE> pairs. Values that match the object type must contain all of the specified keys, and the value for each key must match its specified type. (Values with additional keys can still match an object type, but the extra attributes are discarded during type conversion.)
For Example: An object type of object({ name=string, age=number }) would match a value like the following:
{
name = "John"
age = 52
}
* tuple(...): A sequence of elements identified by consecutive whole numbers starting with zero, where each element has its own type.
The schema for tuple types is [<TYPE>, <TYPE>, ...] — a pair of square brackets containing a comma-separated series of types. Values that match the tuple type must have exactly the same number of elements (no more and no fewer), and the value in each position must match the specified type for that position.
For Example: A tuple type of tuple([string, number, bool]) would match a value like the following:
["a", 15, true]
Connection block
=================
Connection blocks don't take a block label, and can be nested within either a resource or a provisioner.
resource "aws_instance" "quiz_experts" {
  ami           = "ami-04579a6a597"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/terraform")
    host        = self.public_ip
  }
}
Migrate to TF cloud
====================
Add a remote backend
terraform {
+ backend "remote" {
+   organization = "<ORG_NAME>"
+   workspaces {
+     name = "Example-Workspace"
+   }
+ }
}
Run terraform login to request an API token; answer yes when prompted. A browser window opens and provides an API token; paste it back into the terminal.
terraform init
Do you want to copy existing state to new backend? Yes
Set workspace variables. Place AWS key/secret access key into Workspace Variables.
Run a terraform apply
tfstate will be populated remotely but no changes will need to be applied (tfstate remains the same).
rm terraform.tfstate
terraform refresh
=================
Refreshing state via terraform refresh has been deprecated; use instead:
$ terraform plan -refresh-only
$ terraform apply -refresh-only