@Artemu
Created October 5, 2018 11:39
Failing to delete
$ terraform destroy --target module.eks-staging
aws_vpc.eks-main: Refreshing state... (ID: vpc-xxxxxxxxxxxx)
aws_iam_role.eks-role: Refreshing state... (ID: EKS-Control_eks-staging)
aws_iam_role.worker-role: Refreshing state... (ID: EKS-Worker_eks-staging)
data.aws_availability_zones.available: Refreshing state...
aws_iam_instance_profile.worker-role-policy: Refreshing state... (ID: EKS-Worker_eks-staging)
aws_security_group.control-sg: Refreshing state... (ID: sg-xxxxxxxxxxx)
aws_subnet.eks[1]: Refreshing state... (ID: subnet-xxxxxxxxx)
aws_subnet.eks[0]: Refreshing state... (ID: subnet-xxxxxxxxx)
aws_subnet.eks[2]: Refreshing state... (ID: subnet-xxxxxxxxx)
aws_security_group.node-sg: Refreshing state... (ID: sg-xxxxxxxxx)
aws_elasticache_subnet_group.subnet_group: Refreshing state... (ID: eks-staging-group)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- module.eks-staging.aws_iam_instance_profile.worker-role-policy
- module.eks-staging.aws_iam_role.eks-role
- module.eks-staging.aws_iam_role.worker-role
- module.eks-staging.aws_security_group.control-sg
- module.eks-staging.aws_security_group.node-sg
- module.eks-staging.aws_subnet.eks[0]
- module.eks-staging.aws_subnet.eks[1]
- module.eks-staging.aws_subnet.eks[2]
- module.eks-staging.aws_vpc.eks-main
- module.eks-staging.module.elasticache.aws_elasticache_subnet_group.subnet_group
Plan: 0 to add, 0 to change, 10 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
module.eks-staging.module.elasticache.aws_elasticache_subnet_group.subnet_group: Destroying... (ID: eks-staging-group)
module.eks-staging.module.elasticache.aws_elasticache_subnet_group.subnet_group: Destruction complete after 0s
module.eks-staging.aws_subnet.eks[1]: Destroying... (ID: subnet-xxxxxxxxxx)
module.eks-staging.aws_subnet.eks[0]: Destroying... (ID: subnet-xxxxxxxxxx)
module.eks-staging.aws_subnet.eks[2]: Destroying... (ID: subnet-xxxxxxxxxx)
module.eks-staging.aws_subnet.eks[2]: Destruction complete after 1s
module.eks-staging.aws_subnet.eks[0]: Destruction complete after 1s
module.eks-staging.aws_subnet.eks[1]: Destruction complete after 1s
Error: Error applying plan:
1 error(s) occurred:
* module.eks-staging.output.cluster: Resource 'aws_eks_cluster.cluster' does not have attribute 'endpoint' for variable 'aws_eks_cluster.cluster.endpoint'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
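The failure points at the module's `cluster` output: Terraform re-evaluates outputs after the partial destroy, and `aws_eks_cluster.cluster` no longer exposes (or never had, under this -target selection) an `endpoint` attribute to interpolate. A minimal sketch of the kind of output that triggers this, in 0.11-era syntax — the resource and output names come from the error message, but the exact body is an assumption:

```hcl
# Hypothetical output inside module.eks-staging; the value interpolation
# fails once aws_eks_cluster.cluster is outside the targeted graph or
# already destroyed, because the 'endpoint' attribute is unresolvable.
output "cluster" {
  value = "${aws_eks_cluster.cluster.endpoint}"
}
```

Common ways to get past this (hedged suggestions, not confirmed against this configuration): re-run the destroy without `--target` so the cluster resource and its dependent outputs are removed in the same graph, or remove the stale resource from state first with `terraform state rm module.eks-staging.aws_eks_cluster.cluster` before retrying.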