# The basic workflow; init downloads the provider-specific code
$ terraform init
$ terraform plan
$ terraform apply
$ terraform destroy # destroys all managed infrastructure, so not a good idea unless you mean it
Terraform keeps the state of the infrastructure locally (more on that later), so if you make changes in your Terraform code, it compares the desired state with the stored state and tells you what it will do.
We can use terraform graph to get the dependency graph, and then we can visualize it:
# On macOS
$ brew install graphviz
$ terraform graph > graph.dot
$ dot -Tpdf graph.dot -o outfile.pdf
$ open outfile.pdf
We define variables like so:
variable "server_port" {
  description = "The port the server will use for HTTP requests"
  type        = number
  default     = 8080
}
We put them in variables.tf.
Terraform will ask you for the values of the variables when applying, unless you have defaults. You can pass the vars like so:
terraform plan -var "server_port=8080"
To use variables in your Terraform code, you use variable reference expressions: var.<VARIABLE_NAME>.
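For instance, a resource can read the port from the variable. This is a minimal sketch; the security group resource and its name are illustrative, not part of the notes:

```hcl
# Hypothetical security group that opens the port defined by var.server_port
resource "aws_security_group" "instance" {
  name = "example-sg" # illustrative name

  ingress {
    from_port   = var.server_port
    to_port     = var.server_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```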
You can also interpolate variables with ${...}
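Interpolation is useful inside strings, for example in a startup script. A sketch, assuming this sits inside a resource block (busybox httpd is just an example server):

```hcl
# ${var.server_port} is interpolated into the heredoc string
user_data = <<-EOF
            #!/bin/bash
            echo "Hello, World" > index.html
            nohup busybox httpd -f -p ${var.server_port} &
            EOF
```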
We also have outputs, which Terraform will print after applying the code.
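A minimal output block might look like this (the name and description are illustrative):

```hcl
output "server_port" {
  description = "The port the server listens on"
  value       = var.server_port
}
```

After terraform apply, the value is printed and can also be queried with terraform output server_port.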
If you are working on your own, keeping state locally is fine. If you work in a team and put everything under a repo, you can run into problems:
- It's too easy to forget to pull down the latest changes from version control before running Terraform, or to push your latest changes to version control after running Terraform.
- Most version control systems do not provide any form of locking that would prevent two team members from running terraform apply on the same state file at the same time.
- All data in Terraform state files is stored in plain text.
Solution: use remote backends (we have been using the local backend). How do they solve these problems?
- Terraform will automatically push changes and load the state.
- Terraform will automatically lock the state (if the remote backend supports that).
- Most backends support encryption at rest and in transit.
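As a sketch, an S3 remote backend with DynamoDB locking could be configured like this; the bucket, key, region, and table names are placeholders:

```hcl
# Hypothetical S3 backend; bucket and table names are placeholders
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"         # placeholder bucket
    key            = "stage/terraform.tfstate"    # placeholder key
    region         = "us-east-1"
    dynamodb_table = "my-terraform-locks"         # placeholder lock table
    encrypt        = true                         # encrypt state at rest
  }
}
```

Run terraform init again after adding the backend block so Terraform migrates the existing local state.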
A module is just a set of Terraform files within a folder:
.
├── modules
│   └── services
│       └── webserver-cluster
│           ├── README.md
│           ├── main.tf
│           ├── outputs.tf
│           ├── user-data.sh
│           └── variables.tf
├── prod
│   ├── data-stores
│   │   └── mysql
│   │       ├── README.md
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   └── services
│       └── webserver-cluster
│           ├── README.md
│           ├── main.tf
│           ├── outputs.tf
│           └── variables.tf
└── stage
    ├── data-stores
    │   └── mysql
    │       ├── README.md
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── services
        └── webserver-cluster
            ├── README.md
            ├── main.tf
            ├── outputs.tf
            └── variables.tf
You can use variables that you pass to the module. The code in stage will pass variables that point, among other things, to the staging version of the state, or will use a different cluster name.
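For example, the staging code could consume the module like this; the module name, variable, and value are illustrative:

```hcl
# stage/services/webserver-cluster/main.tf (names are illustrative)
module "webserver_cluster" {
  source = "../../../modules/services/webserver-cluster"

  # variables passed into the module
  cluster_name = "webservers-stage"
}
```

The prod copy would use the same source but pass its own values, e.g. cluster_name = "webservers-prod".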
You can still have locals within your module. Those are, well, local variables.
You use them with the expression local.<NAME>.
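A quick sketch of a locals block; the names and values are illustrative:

```hcl
# Hypothetical locals block inside a module
locals {
  http_port = 80
  any_ip    = ["0.0.0.0/0"]
}
```

Elsewhere in the module you would write, e.g., from_port = local.http_port.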
When you are using a module, you can point to a GitHub URL and use tags. That way, you can point your staging code to version 0.0.2 and production to version 0.0.1. Then you test staging and, when you are ready, you switch production to the latest version.
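A versioned module source could look like this; the organization, repo, and tag are placeholders (the // separates the repo from the subdirectory):

```hcl
# Pointing at a tagged release in a hypothetical GitHub repo
module "webserver_cluster" {
  source = "github.com/acme/modules//services/webserver-cluster?ref=v0.0.2"

  cluster_name = "webservers-stage" # illustrative variable
}
```

To upgrade production, you would only change ref=v0.0.1 to ref=v0.0.2 in the prod copy and re-run terraform init and terraform apply.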