Terragrunt provides a remote state example for S3. I started this morning by adapting that to GCS.
# root/terragrunt.hcl
remote_state {
  backend = "gcs"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket      = "terraform-state"
    prefix      = "${path_relative_to_include()}"
    credentials = "path/to/creds/file.json"
  }
}
After running terragrunt plan-all to recursively hit all of my layers, this file was generated:
# root/path/to/layer/backend.tf
# Generated by Terragrunt. Sig: qqq
terraform {
  backend "gcs" {
    bucket      = "terraform-state"
    credentials = "path/to/creds/file.json"
    prefix      = "path/to/layer"
  }
}
But this error was also thrown for every layer:
[terragrunt] 2020/08/12 10:54:36 Encountered the following errors:
dialing: cannot read credentials file: open path/to/creds/file.json: no such file or directory
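My read (unverified) is that Terragrunt is dialing GCS itself here, so the relative credentials path gets resolved against whatever working directory Terragrunt runs from. One possible workaround is to pass an absolute path through an environment variable using Terragrunt's built-in get_env() function. This is just a sketch; GOOGLE_APPLICATION_CREDENTIALS is Google's standard variable name, not something from my actual setup, and it's assumed to be exported in your shell:

```hcl
# root/terragrunt.hcl (sketch)
remote_state {
  backend = "gcs"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket = "terraform-state"
    prefix = "${path_relative_to_include()}"
    # Absolute path from the environment, so it resolves no matter
    # which directory Terragrunt dials GCS from.
    credentials = get_env("GOOGLE_APPLICATION_CREDENTIALS", "")
  }
}
```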
After messing around with a few different things, I realized I could just generate the backend without using Terragrunt's remote_state.
# root/terragrunt.hcl
generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  backend "gcs" {
    prefix      = "${path_relative_to_include()}"
    bucket      = "terraform-state"
    credentials = "path/to/creds/file.json"
  }
}
EOF
}
The same backend.tf is generated but, since we're not using Terragrunt's remote_state, it's read just as Terraform would read it.
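For completeness, each layer only needs to include the root config for that generate block (and path_relative_to_include()) to take effect there. A minimal sketch using Terragrunt's standard include mechanism:

```hcl
# root/path/to/layer/terragrunt.hcl (sketch)
# Pull in the root terragrunt.hcl so the generated backend.tf lands in
# this layer, with prefix set to this layer's relative path.
include {
  path = find_in_parent_folders()
}
```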
If I've got time later, I'll dig around in the code to see why this is happening. Part of my issue is that I'm using a path outside of Terragrunt's control (I don't keep my service account creds in the same place as my TF). Terragrunt can also be brittle when working with non-AWS providers. If you've got a better approach or have encountered the same issue, I'd love to hear about it!