Created
October 14, 2021 21:53
cloudstrapper:~/magma-experimental/ens15to161/terraform #terraform init --upgrade
Upgrading modules...
Downloading github.com/magma/magma?ref=v1.6 for orc8r...
- orc8r in .terraform/modules/orc8r/orc8r/cloud/deploy/terraform/orc8r-aws
Downloading terraform-aws-modules/eks/aws 17.0.3 for orc8r.eks...
- orc8r.eks in .terraform/modules/orc8r.eks
- orc8r.eks.fargate in .terraform/modules/orc8r.eks/modules/fargate
- orc8r.eks.node_groups in .terraform/modules/orc8r.eks/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.17.0 for orc8r.vpc...
- orc8r.vpc in .terraform/modules/orc8r.vpc
Downloading github.com/magma/magma?ref=v1.6 for orc8r-app...
- orc8r-app in .terraform/modules/orc8r-app/orc8r/cloud/deploy/terraform/orc8r-helm-aws

Initializing the backend...

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding latest version of hashicorp/null...
- Finding hashicorp/helm versions matching "~> 1.0"...
- Finding hashicorp/kubernetes versions matching ">= 1.11.1, ~> 1.11.1"...
- Finding hashicorp/template versions matching "~> 2.0"...
- Finding hashicorp/tls versions matching "~> 2.1"...
- Finding latest version of hashicorp/cloudinit...
- Finding hashicorp/aws versions matching ">= 2.6.0, >= 3.40.0"...
- Finding terraform-aws-modules/http versions matching ">= 2.4.1"...
- Finding hashicorp/local versions matching ">= 1.4.0"...
- Finding hashicorp/random versions matching "~> 2.1"...
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/tls v2.2.0
- Installing hashicorp/cloudinit v2.2.0...
- Installed hashicorp/cloudinit v2.2.0 (self-signed, key ID 34365D9472D7468F)
- Using previously-installed hashicorp/aws v3.63.0
- Using previously-installed hashicorp/local v2.1.0
- Using previously-installed hashicorp/random v2.3.1
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/helm v1.3.2
- Installing hashicorp/kubernetes v1.11.4...
- Installed hashicorp/kubernetes v1.11.4 (self-signed, key ID 34365D9472D7468F)
- Installing terraform-aws-modules/http v2.4.1...
- Installed terraform-aws-modules/http v2.4.1 (self-signed, key ID B2C1C0641B6B0EB7)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
╷
│ Warning: Version constraints inside provider configuration blocks are deprecated
│
│   on .terraform/modules/orc8r/orc8r/cloud/deploy/terraform/orc8r-aws/providers.tf line 19, in provider "random":
│   19:   version = "~> 2.1"
│
│ Terraform 0.13 and earlier allowed provider version constraints inside the provider configuration block, but that is now deprecated and will be removed in a future
│ version of Terraform. To silence this warning, move the provider version constraint into the required_providers block.
╵
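For reference, the fix Terraform suggests above looks like the following sketch (the "~> 2.1" constraint and provider name are taken from the warning; since the offending providers.tf lives inside the downloaded orc8r module, the change belongs upstream in magma/magma, and the warning is harmless for this run):

```hcl
# Deprecated form, as flagged on providers.tf line 19:
#   provider "random" {
#     version = "~> 2.1"
#   }

# Preferred form since Terraform 0.13: declare the constraint in
# required_providers and leave the provider block itself unversioned.
terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 2.1"
    }
  }
}

provider "random" {}
```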
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

cloudstrapper:~/magma-experimental/ens15to161/terraform #terraform refresh
module.orc8r.module.eks.null_resource.wait_for_cluster[0]: Refreshing state... [id=4351803845162643018]
module.orc8r.module.eks.random_pet.workers[0]: Refreshing state... [id=fleet-wren]
module.orc8r.tls_private_key.eks_workers[0]: Refreshing state... [id=4df16044e1ffde722e82429c587d8486ef801af1]
module.orc8r.aws_key_pair.eks_workers[0]: Refreshing state... [id=orc8r20211014202558930600000002]
module.orc8r.module.eks.aws_cloudwatch_log_group.this[0]: Refreshing state... [id=/aws/eks/orc8r/cluster]
module.orc8r.module.eks.aws_iam_policy.worker_autoscaling[0]: Refreshing state... [id=arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_autoscaling[0]: Refreshing state... [id=orc8r20211014203617347500000008-20211014203617780400000011]
module.orc8r.aws_efs_file_system.eks_pv: Refreshing state... [id=fs-4d4684bd]
module.orc8r.module.vpc.aws_vpc.this[0]: Refreshing state... [id=vpc-0af6659c710433534]
module.orc8r.aws_route53_zone.orc8r: Refreshing state... [id=Z00587723R7WYKJY5OXY7]
module.orc8r.aws_secretsmanager_secret.orc8r_secrets: Refreshing state... [id=arn:aws:secretsmanager:eu-west-2:007606123670:secret:orc8r-secrets-WB8jC1]
module.orc8r.module.eks.aws_iam_role.cluster[0]: Refreshing state... [id=orc8r20211014202558934600000003]
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Refreshing state... [id=orc8r20211014202558934600000003-20211014202559413500000005]
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Refreshing state... [id=orc8r20211014202558934600000003-20211014202559436600000006]
module.orc8r.module.vpc.aws_eip.nat[0]: Refreshing state... [id=eipalloc-088d06076d1b98739]
module.orc8r-app.null_resource.orc8r_seed_secrets: Refreshing state... [id=6962597296648011302]
module.orc8r.module.vpc.aws_subnet.private[0]: Refreshing state... [id=subnet-0079bd310f7d57dc5]
module.orc8r.module.vpc.aws_subnet.database[2]: Refreshing state... [id=subnet-07b891afa5ac59609]
module.orc8r.aws_security_group.default: Refreshing state... [id=sg-044f08576d4a5f2d3]
module.orc8r.module.eks.aws_security_group.cluster[0]: Refreshing state... [id=sg-013d9e86f091d43a2]
module.orc8r.module.vpc.aws_route_table.public[0]: Refreshing state... [id=rtb-0f10214f1b0892b68]
module.orc8r.module.vpc.aws_internet_gateway.this[0]: Refreshing state... [id=igw-0aac241246e351bc1]
module.orc8r.module.vpc.aws_route_table.private[0]: Refreshing state... [id=rtb-0efe5357ea3bcf741]
module.orc8r.module.eks.aws_security_group.workers[0]: Refreshing state... [id=sg-0e5068e7d50642104]
module.orc8r.module.vpc.aws_subnet.database[0]: Refreshing state... [id=subnet-0317dda25ec3ba63a]
module.orc8r.module.vpc.aws_subnet.database[1]: Refreshing state... [id=subnet-0683366172b4e71fd]
module.orc8r.module.vpc.aws_subnet.private[1]: Refreshing state... [id=subnet-058d21b84bcf30907]
module.orc8r.module.vpc.aws_subnet.private[2]: Refreshing state... [id=subnet-04488cbe3ea884f1b]
module.orc8r.module.vpc.aws_subnet.public[0]: Refreshing state... [id=subnet-01d0fd9c53d4a7b4f]
module.orc8r.module.vpc.aws_subnet.public[1]: Refreshing state... [id=subnet-02bfb7eed52cc8c11]
module.orc8r.module.vpc.aws_subnet.public[2]: Refreshing state... [id=subnet-03e3b6f285c05fe87]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Refreshing state... [id=sgrule-3670891154]
module.orc8r.module.vpc.aws_route.public_internet_gateway[0]: Refreshing state... [id=r-rtb-0f10214f1b0892b681080289494]
module.orc8r.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Refreshing state... [id=sgrule-3588698154]
module.orc8r.module.eks.aws_security_group_rule.cluster_egress_internet[0]: Refreshing state... [id=sgrule-2233125216]
module.orc8r.module.eks.aws_security_group_rule.workers_egress_internet[0]: Refreshing state... [id=sgrule-525574342]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_self[0]: Refreshing state... [id=sgrule-3783539046]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Refreshing state... [id=sgrule-1435813245]
module.orc8r.module.vpc.aws_route_table_association.database[0]: Refreshing state... [id=rtbassoc-0d657b4f3bf39aad3]
module.orc8r.module.vpc.aws_route_table_association.database[1]: Refreshing state... [id=rtbassoc-06836a685ce411396]
module.orc8r.module.vpc.aws_route_table_association.database[2]: Refreshing state... [id=rtbassoc-080fd0751205f26f4]
module.orc8r.module.vpc.aws_db_subnet_group.database[0]: Refreshing state... [id=orc8r]
module.orc8r.module.vpc.aws_route_table_association.private[2]: Refreshing state... [id=rtbassoc-069ae6ba47208fe66]
module.orc8r.module.vpc.aws_route_table_association.public[2]: Refreshing state... [id=rtbassoc-0b5384116ea3674fa]
module.orc8r.module.vpc.aws_route_table_association.private[0]: Refreshing state... [id=rtbassoc-02b5b04ddf3e325c6]
module.orc8r.module.vpc.aws_route_table_association.private[1]: Refreshing state... [id=rtbassoc-025427fc75a27ad89]
module.orc8r.module.vpc.aws_nat_gateway.this[0]: Refreshing state... [id=nat-0b2525dd48f51e0aa]
module.orc8r.module.vpc.aws_route_table_association.public[0]: Refreshing state... [id=rtbassoc-05a2ed567726ff6c9]
module.orc8r.module.vpc.aws_route_table_association.public[1]: Refreshing state... [id=rtbassoc-07149834ecdc6fda9]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[2]: Refreshing state... [id=fsmt-f8c6ae09]
module.orc8r.module.eks.aws_eks_cluster.this[0]: Refreshing state... [id=orc8r]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[0]: Refreshing state... [id=fsmt-f4c6ae05]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[1]: Refreshing state... [id=fsmt-f6c6ae07]
module.orc8r.aws_elasticsearch_domain.es[0]: Refreshing state... [id=arn:aws:es:eu-west-2:007606123670:domain/orc8r-es]
module.orc8r.module.vpc.aws_route.private_nat_gateway[0]: Refreshing state... [id=r-rtb-0efe5357ea3bcf7411080289494]
module.orc8r.aws_db_instance.default: Refreshing state... [id=orc8rdb]
module.orc8r.module.eks.aws_iam_role.workers[0]: Refreshing state... [id=orc8r20211014203617347500000008]
module.orc8r.module.eks.local_file.kubeconfig[0]: Refreshing state... [id=89df0f64e1d41b7520de987be686742c6d332b6f]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361772960000000e]
module.orc8r.module.eks.aws_iam_instance_profile.workers[0]: Refreshing state... [id=orc8r2021101420361765350000000b]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361773710000000f]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361772750000000d]
module.orc8r.aws_iam_role.efs_provisioner: Refreshing state... [id=EFSProvisionerRole20211014203617749900000010]
module.orc8r.aws_iam_role.external_dns: Refreshing state... [id=ExternalDNSRole2021101420361771280000000c]
module.orc8r.module.eks.aws_launch_configuration.workers[0]: Refreshing state... [id=orc8r-wg-120211014203620138100000012]
module.orc8r.aws_iam_role_policy_attachment.efs_provisioner: Refreshing state... [id=EFSProvisionerRole20211014203617749900000010-20211014203625798200000014]
module.orc8r.aws_iam_role_policy.external_dns: Refreshing state... [id=ExternalDNSRole2021101420361771280000000c:terraform-20211014203625684400000013]
module.orc8r.aws_elasticsearch_domain_policy.es_management_access[0]: Refreshing state... [id=esd-policy-orc8r-es]
module.orc8r.module.eks.aws_autoscaling_group.workers[0]: Refreshing state... [id=orc8r-wg-120211014203626650900000015]
module.orc8r-app.helm_release.external_dns: Refreshing state... [id=external-dns]
module.orc8r-app.kubernetes_cluster_role_binding.tiller[0]: Refreshing state... [id=tiller]
module.orc8r-app.kubernetes_namespace.orc8r: Refreshing state... [id=orc8r]
module.orc8r-app.kubernetes_namespace.monitoring: Refreshing state... [id=monitoring]
module.orc8r-app.kubernetes_service_account.tiller[0]: Refreshing state... [id=kube-system/tiller]
module.orc8r-app.helm_release.efs_provisioner: Refreshing state... [id=efs-provisioner]
module.orc8r.module.eks.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
module.orc8r-app.helm_release.elasticsearch_curator[0]: Refreshing state... [id=elasticsearch-curator]
module.orc8r-app.kubernetes_secret.orc8r_certs: Refreshing state... [id=orc8r/orc8r-certs]
module.orc8r-app.kubernetes_secret.fluentd_certs: Refreshing state... [id=orc8r/fluentd-certs]
module.orc8r-app.kubernetes_secret.nms_certs[0]: Refreshing state... [id=orc8r/nms-certs]
module.orc8r-app.kubernetes_secret.artifactory: Refreshing state... [id=orc8r/artifactory]
module.orc8r-app.kubernetes_secret.orc8r_configs: Refreshing state... [id=orc8r/orc8r-configs]
module.orc8r-app.kubernetes_secret.orc8r_envdir: Refreshing state... [id=orc8r/orc8r-envdir]
module.orc8r-app.helm_release.fluentd[0]: Refreshing state... [id=fluentd]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanaproviders"]: Refreshing state... [id=orc8r/grafanaproviders]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["promcfg"]: Refreshing state... [id=orc8r/promcfg]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["promdata"]: Refreshing state... [id=orc8r/promdata]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadashboards"]: Refreshing state... [id=orc8r/grafanadashboards]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadata"]: Refreshing state... [id=orc8r/grafanadata]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadatasources"]: Refreshing state... [id=orc8r/grafanadatasources]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["openvpn"]: Refreshing state... [id=orc8r/openvpn]
module.orc8r-app.helm_release.lte-orc8r[0]: Refreshing state... [id=lte-orc8r]
module.orc8r-app.helm_release.orc8r: Refreshing state... [id=orc8r]
╷
│ Warning: Version constraints inside provider configuration blocks are deprecated
│
│   on .terraform/modules/orc8r/orc8r/cloud/deploy/terraform/orc8r-aws/providers.tf line 19, in provider "random":
│   19:   version = "~> 2.1"
│
│ Terraform 0.13 and earlier allowed provider version constraints inside the provider configuration block, but that is now deprecated and will be removed in a future
│ version of Terraform. To silence this warning, move the provider version constraint into the required_providers block.
╵

Outputs:

nameservers = tolist([
  "ns-1064.awsdns-05.org",
  "ns-1541.awsdns-00.co.uk",
  "ns-161.awsdns-20.com",
  "ns-871.awsdns-44.net",
])
cloudstrapper:~/magma-experimental/ens15to161/terraform #terraform apply
module.orc8r.module.eks.null_resource.wait_for_cluster[0]: Refreshing state... [id=4351803845162643018]
module.orc8r.tls_private_key.eks_workers[0]: Refreshing state... [id=4df16044e1ffde722e82429c587d8486ef801af1]
module.orc8r.module.eks.random_pet.workers[0]: Refreshing state... [id=fleet-wren]
module.orc8r.aws_key_pair.eks_workers[0]: Refreshing state... [id=orc8r20211014202558930600000002]
module.orc8r.module.eks.aws_cloudwatch_log_group.this[0]: Refreshing state... [id=/aws/eks/orc8r/cluster]
module.orc8r.aws_efs_file_system.eks_pv: Refreshing state... [id=fs-4d4684bd]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_autoscaling[0]: Refreshing state... [id=orc8r20211014203617347500000008-20211014203617780400000011]
module.orc8r.aws_secretsmanager_secret.orc8r_secrets: Refreshing state... [id=arn:aws:secretsmanager:eu-west-2:007606123670:secret:orc8r-secrets-WB8jC1]
module.orc8r.module.eks.aws_iam_policy.worker_autoscaling[0]: Refreshing state... [id=arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a]
module.orc8r.module.vpc.aws_vpc.this[0]: Refreshing state... [id=vpc-0af6659c710433534]
module.orc8r.aws_route53_zone.orc8r: Refreshing state... [id=Z00587723R7WYKJY5OXY7]
module.orc8r.module.eks.aws_iam_role.cluster[0]: Refreshing state... [id=orc8r20211014202558934600000003]
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Refreshing state... [id=orc8r20211014202558934600000003-20211014202559436600000006]
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Refreshing state... [id=orc8r20211014202558934600000003-20211014202559413500000005]
module.orc8r.module.vpc.aws_eip.nat[0]: Refreshing state... [id=eipalloc-088d06076d1b98739]
module.orc8r-app.null_resource.orc8r_seed_secrets: Refreshing state... [id=6962597296648011302]
module.orc8r.module.vpc.aws_subnet.private[2]: Refreshing state... [id=subnet-04488cbe3ea884f1b]
module.orc8r.module.eks.aws_security_group.cluster[0]: Refreshing state... [id=sg-013d9e86f091d43a2]
module.orc8r.aws_security_group.default: Refreshing state... [id=sg-044f08576d4a5f2d3]
module.orc8r.module.vpc.aws_route_table.public[0]: Refreshing state... [id=rtb-0f10214f1b0892b68]
module.orc8r.module.vpc.aws_subnet.database[0]: Refreshing state... [id=subnet-0317dda25ec3ba63a]
module.orc8r.module.vpc.aws_internet_gateway.this[0]: Refreshing state... [id=igw-0aac241246e351bc1]
module.orc8r.module.vpc.aws_route_table.private[0]: Refreshing state... [id=rtb-0efe5357ea3bcf741]
module.orc8r.module.vpc.aws_subnet.public[2]: Refreshing state... [id=subnet-03e3b6f285c05fe87]
module.orc8r.module.vpc.aws_subnet.private[0]: Refreshing state... [id=subnet-0079bd310f7d57dc5]
module.orc8r.module.vpc.aws_subnet.private[1]: Refreshing state... [id=subnet-058d21b84bcf30907]
module.orc8r.module.vpc.aws_subnet.database[1]: Refreshing state... [id=subnet-0683366172b4e71fd]
module.orc8r.module.vpc.aws_subnet.database[2]: Refreshing state... [id=subnet-07b891afa5ac59609]
module.orc8r.module.vpc.aws_route.public_internet_gateway[0]: Refreshing state... [id=r-rtb-0f10214f1b0892b681080289494]
module.orc8r.module.vpc.aws_subnet.public[1]: Refreshing state... [id=subnet-02bfb7eed52cc8c11]
module.orc8r.module.vpc.aws_subnet.public[0]: Refreshing state... [id=subnet-01d0fd9c53d4a7b4f]
module.orc8r.module.eks.aws_security_group_rule.cluster_egress_internet[0]: Refreshing state... [id=sgrule-2233125216]
module.orc8r.module.eks.aws_security_group.workers[0]: Refreshing state... [id=sg-0e5068e7d50642104]
module.orc8r.module.vpc.aws_route_table_association.private[1]: Refreshing state... [id=rtbassoc-025427fc75a27ad89]
module.orc8r.module.vpc.aws_route_table_association.private[2]: Refreshing state... [id=rtbassoc-069ae6ba47208fe66]
module.orc8r.module.vpc.aws_route_table_association.private[0]: Refreshing state... [id=rtbassoc-02b5b04ddf3e325c6]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Refreshing state... [id=sgrule-3670891154]
module.orc8r.module.eks.aws_security_group_rule.workers_egress_internet[0]: Refreshing state... [id=sgrule-525574342]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Refreshing state... [id=sgrule-1435813245]
module.orc8r.module.vpc.aws_route_table_association.public[0]: Refreshing state... [id=rtbassoc-05a2ed567726ff6c9]
module.orc8r.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Refreshing state... [id=sgrule-3588698154]
module.orc8r.module.eks.aws_security_group_rule.workers_ingress_self[0]: Refreshing state... [id=sgrule-3783539046]
module.orc8r.module.vpc.aws_nat_gateway.this[0]: Refreshing state... [id=nat-0b2525dd48f51e0aa]
module.orc8r.module.vpc.aws_route_table_association.public[1]: Refreshing state... [id=rtbassoc-07149834ecdc6fda9]
module.orc8r.module.vpc.aws_route_table_association.public[2]: Refreshing state... [id=rtbassoc-0b5384116ea3674fa]
module.orc8r.module.vpc.aws_route_table_association.database[1]: Refreshing state... [id=rtbassoc-06836a685ce411396]
module.orc8r.module.vpc.aws_db_subnet_group.database[0]: Refreshing state... [id=orc8r]
module.orc8r.module.vpc.aws_route_table_association.database[2]: Refreshing state... [id=rtbassoc-080fd0751205f26f4]
module.orc8r.module.vpc.aws_route_table_association.database[0]: Refreshing state... [id=rtbassoc-0d657b4f3bf39aad3]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[1]: Refreshing state... [id=fsmt-f6c6ae07]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[2]: Refreshing state... [id=fsmt-f8c6ae09]
module.orc8r.aws_efs_mount_target.eks_pv_mnt[0]: Refreshing state... [id=fsmt-f4c6ae05]
module.orc8r.aws_elasticsearch_domain.es[0]: Refreshing state... [id=arn:aws:es:eu-west-2:007606123670:domain/orc8r-es]
module.orc8r.module.eks.aws_eks_cluster.this[0]: Refreshing state... [id=orc8r]
module.orc8r.module.vpc.aws_route.private_nat_gateway[0]: Refreshing state... [id=r-rtb-0efe5357ea3bcf7411080289494]
module.orc8r.module.eks.local_file.kubeconfig[0]: Refreshing state... [id=89df0f64e1d41b7520de987be686742c6d332b6f]
module.orc8r.module.eks.aws_iam_role.workers[0]: Refreshing state... [id=orc8r20211014203617347500000008]
module.orc8r.aws_db_instance.default: Refreshing state... [id=orc8rdb]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361772960000000e]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361772750000000d]
module.orc8r.module.eks.aws_iam_instance_profile.workers[0]: Refreshing state... [id=orc8r2021101420361765350000000b]
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Refreshing state... [id=orc8r20211014203617347500000008-2021101420361773710000000f]
module.orc8r.aws_iam_role.efs_provisioner: Refreshing state... [id=EFSProvisionerRole20211014203617749900000010]
module.orc8r.aws_iam_role.external_dns: Refreshing state... [id=ExternalDNSRole2021101420361771280000000c]
module.orc8r.module.eks.aws_launch_configuration.workers[0]: Refreshing state... [id=orc8r-wg-120211014203620138100000012]
module.orc8r.aws_iam_role_policy_attachment.efs_provisioner: Refreshing state... [id=EFSProvisionerRole20211014203617749900000010-20211014203625798200000014]
module.orc8r.aws_iam_role_policy.external_dns: Refreshing state... [id=ExternalDNSRole2021101420361771280000000c:terraform-20211014203625684400000013]
module.orc8r.aws_elasticsearch_domain_policy.es_management_access[0]: Refreshing state... [id=esd-policy-orc8r-es]
module.orc8r.module.eks.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
module.orc8r-app.helm_release.efs_provisioner: Refreshing state... [id=efs-provisioner]
module.orc8r.module.eks.aws_autoscaling_group.workers[0]: Refreshing state... [id=orc8r-wg-120211014203626650900000015]
module.orc8r-app.helm_release.external_dns: Refreshing state... [id=external-dns]
module.orc8r-app.kubernetes_namespace.orc8r: Refreshing state... [id=orc8r]
module.orc8r-app.kubernetes_service_account.tiller[0]: Refreshing state... [id=kube-system/tiller]
module.orc8r-app.kubernetes_namespace.monitoring: Refreshing state... [id=monitoring]
module.orc8r-app.kubernetes_cluster_role_binding.tiller[0]: Refreshing state... [id=tiller]
module.orc8r-app.kubernetes_secret.nms_certs[0]: Refreshing state... [id=orc8r/nms-certs]
module.orc8r-app.kubernetes_secret.fluentd_certs: Refreshing state... [id=orc8r/fluentd-certs]
module.orc8r-app.kubernetes_secret.orc8r_certs: Refreshing state... [id=orc8r/orc8r-certs]
module.orc8r-app.kubernetes_secret.orc8r_configs: Refreshing state... [id=orc8r/orc8r-configs]
module.orc8r-app.helm_release.elasticsearch_curator[0]: Refreshing state... [id=elasticsearch-curator]
module.orc8r-app.kubernetes_secret.artifactory: Refreshing state... [id=orc8r/artifactory]
module.orc8r-app.kubernetes_secret.orc8r_envdir: Refreshing state... [id=orc8r/orc8r-envdir]
module.orc8r-app.helm_release.fluentd[0]: Refreshing state... [id=fluentd]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadashboards"]: Refreshing state... [id=orc8r/grafanadashboards]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["promcfg"]: Refreshing state... [id=orc8r/promcfg]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["promdata"]: Refreshing state... [id=orc8r/promdata]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanaproviders"]: Refreshing state... [id=orc8r/grafanaproviders]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["openvpn"]: Refreshing state... [id=orc8r/openvpn]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadata"]: Refreshing state... [id=orc8r/grafanadata]
module.orc8r-app.kubernetes_persistent_volume_claim.storage["grafanadatasources"]: Refreshing state... [id=orc8r/grafanadatasources]
module.orc8r-app.helm_release.lte-orc8r[0]: Refreshing state... [id=lte-orc8r]
module.orc8r-app.helm_release.orc8r: Refreshing state... [id=orc8r]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
  - destroy
-/+ destroy and then create replacement
+/- create replacement and then destroy
 <= read (data resources)

Terraform will perform the following actions:

  # module.orc8r.data.aws_iam_policy_document.es-management[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "es-management" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions   = [
              + "es:*",
            ]
          + effect    = "Allow"
          + resources = [
              + "arn:aws:es:eu-west-2:007606123670:domain/orc8r-es/*",
            ]

          + principals {
              + identifiers = [
                  + "*",
                ]
              + type        = "AWS"
            }
        }
    }

  # module.orc8r.aws_db_event_subscription.default will be created
  + resource "aws_db_event_subscription" "default" {
      + arn              = (known after apply)
      + customer_aws_id  = (known after apply)
      + enabled          = true
      + event_categories = [
          + "failure",
          + "maintenance",
          + "notification",
          + "restoration",
        ]
      + id               = (known after apply)
      + name             = "orc8r-rds-events"
      + name_prefix      = (known after apply)
      + sns_topic        = (known after apply)
      + source_ids       = [
          + "orc8rdb",
        ]
      + source_type      = "db-instance"
      + tags_all         = (known after apply)
    }

  # module.orc8r.aws_db_instance.default will be updated in-place
  ~ resource "aws_db_instance" "default" {
      + allow_major_version_upgrade = true
      ~ backup_retention_period     = 0 -> 7
      ~ backup_window               = "22:05-22:35" -> "01:00-01:30"
        id                          = "orc8rdb"
        name                        = "orc8r"
        tags                        = {}
        # (45 unchanged attributes hidden)
    }

  # module.orc8r.aws_elasticsearch_domain.es[0] will be updated in-place
  ~ resource "aws_elasticsearch_domain" "es" {
      ~ advanced_options = {
          - "override_main_response_version" = "false" -> null
            # (1 unchanged element hidden)
        }
        id               = "arn:aws:es:eu-west-2:007606123670:domain/orc8r-es"
        tags             = {}
        # (8 unchanged attributes hidden)

        # (9 unchanged blocks hidden)
    }

  # module.orc8r.aws_elasticsearch_domain_policy.es_management_access[0] will be updated in-place
  ~ resource "aws_elasticsearch_domain_policy" "es_management_access" {
      ~ access_policies = jsonencode(
            {
              - Statement = [
                  - {
                      - Action    = "es:*"
                      - Effect    = "Allow"
                      - Principal = {
                          - AWS = "*"
                        }
                      - Resource  = "arn:aws:es:eu-west-2:007606123670:domain/orc8r-es/*"
                      - Sid       = ""
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
        id              = "esd-policy-orc8r-es"
        # (1 unchanged attribute hidden)
    }

  # module.orc8r.aws_sns_topic.sns_orc8r_topic will be created
  + resource "aws_sns_topic" "sns_orc8r_topic" {
      + arn                         = (known after apply)
      + content_based_deduplication = false
      + fifo_topic                  = false
      + id                          = (known after apply)
      + name                        = "orc8r-sns"
      + name_prefix                 = (known after apply)
      + owner                       = (known after apply)
      + policy                      = (known after apply)
      + tags_all                    = (known after apply)
    }

  # module.orc8r-app.data.template_file.orc8r_values will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "orc8r_values" {
      + id       = (known after apply)
      + rendered = (known after apply)
      + template = <<-EOT
            ################################################################################
            # Copyright 2020 The Magma Authors.
            # This source code is licensed under the BSD-style license found in the
            # LICENSE file in the root directory of this source tree.
            # Unless required by applicable law or agreed to in writing, software
            # distributed under the License is distributed on an "AS IS" BASIS,
            # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
            # See the License for the specific language governing permissions and
            # limitations under the License.
            ################################################################################

            imagePullSecrets:
              - name: ${image_pull_secret}

            secrets:
              create: false
            secret:
              certs: ${certs_secret}
              configs:
                orc8r: ${configs_secret}
              envdir: ${envdir_secret}

            nginx:
              create: true
              podDisruptionBudget:
                enabled: true
              image:
                repository: ${docker_registry}/nginx
                tag: "${docker_tag}"
              replicas: ${nginx_replicas}
              service:
                enabled: true
                legacyEnabled: true
                annotations:
                  service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "magma-uuid=${magma_uuid}"
                extraAnnotations:
                  proxy:
                    external-dns.alpha.kubernetes.io/hostname: ${api_hostname}
                  bootstrapLagacy:
                    external-dns.alpha.kubernetes.io/hostname: bootstrapper-${controller_hostname}
                  clientcertLegacy:
                    external-dns.alpha.kubernetes.io/hostname: ${controller_hostname}
                name: orc8r-bootstrap-nginx
                type: LoadBalancer
              spec:
                hostname: ${controller_hostname}

            controller:
              podDisruptionBudget:
                enabled: true
              image:
                repository: ${docker_registry}/controller
                tag: "${docker_tag}"
              replicas: ${controller_replicas}
              spec:
                database:
                  db: ${orc8r_db_name}
                  host: ${orc8r_db_host}
                  port: ${orc8r_db_port}
                  user: ${orc8r_db_user}
                service_registry:
                  mode: "k8s"

            metrics:
              imagePullSecrets:
                - name: ${image_pull_secret}
              metrics:
                volumes:
                  prometheusData:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${metrics_pvc_promdata}
                  prometheusConfig:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${metrics_pvc_promcfg}
              prometheus:
                create: true
                includeOrc8rAlerts: true
                prometheusCacheHostname: ${prometheus_cache_hostname}
                alertmanagerHostname: ${alertmanager_hostname}
              alertmanager:
                create: true
              prometheusConfigurer:
                create: true
                image:
                  repository: docker.io/facebookincubator/prometheus-configurer
                  tag: ${prometheus_configurer_version}
                prometheusURL: ${prometheus_url}
              alertmanagerConfigurer:
                create: true
                image:
                  repository: docker.io/facebookincubator/alertmanager-configurer
                  tag: ${alertmanager_configurer_version}
                alertmanagerURL: ${alertmanager_url}
              prometheusCache:
                create: true
                image:
                  repository: docker.io/facebookincubator/prometheus-edge-hub
                  tag: 1.1.0
                limit: 500000
              grafana:
                create: false
              userGrafana:
                image:
                  repository: docker.io/grafana/grafana
                  tag: 6.6.2
                create: ${create_usergrafana}
                volumes:
                  datasources:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${grafana_pvc_grafanaDatasources}
                  dashboardproviders:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${grafana_pvc_grafanaProviders}
                  dashboards:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${grafana_pvc_grafanaDashboards}
                  grafanaData:
                    volumeSpec:
                      persistentVolumeClaim:
                        claimName: ${grafana_pvc_grafanaData}
              thanos:
                enabled: ${thanos_enabled}
                compact:
                  nodeSelector:
                    ${thanos_compact_selector}
                store:
                  nodeSelector:
                    ${thanos_store_selector}
                query:
                  nodeSelector:
                    ${thanos_query_selector}
                objstore:
                  type: S3
                  config:
                    bucket: ${thanos_bucket}
                    endpoint: s3.${region}.amazonaws.com
                    region: ${region}
                    access_key: ${thanos_aws_access_key}
                    secret_key: ${thanos_aws_secret_key}
                    insecure: false
                    signature_version2: false
                    put_user_metadata: {}
                    http_config:
                      idle_conn_timeout: 0s
                      response_header_timeout: 0s
                      insecure_skip_verify: false
                    trace:
                      enable: false
                    part_size: 0

            nms:
              enabled: ${deploy_nms}
              imagePullSecrets:
                - name: ${image_pull_secret}
              secret:
certs: ${nms_certs_secret} | |
magmalte: | |
create: true | |
image: | |
repository: ${docker_registry}/magmalte | |
tag: "${docker_tag}" | |
env: | |
api_host: ${api_hostname} | |
mysql_db: ${orc8r_db_name} | |
mysql_dialect: ${orc8r_db_dialect} | |
mysql_host: ${orc8r_db_host} | |
mysql_port: ${orc8r_db_port} | |
mysql_user: ${orc8r_db_user} | |
mysql_pass: ${orc8r_db_pass} | |
grafana_address: ${user_grafana_hostname} | |
nginx: | |
create: true | |
service: | |
type: LoadBalancer | |
annotations: | |
external-dns.alpha.kubernetes.io/hostname: "${nms_hostname}" | |
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "magma-uuid=${magma_uuid}" | |
deployment: | |
spec: | |
ssl_cert_name: controller.crt | |
ssl_cert_key_name: controller.key | |
logging: | |
enabled: false | |
EOT | |
+ vars = {
+ "alertmanager_configurer_version" = "1.0.4"
+ "alertmanager_hostname" = "orc8r-alertmanager"
+ "alertmanager_url" = "orc8r-alertmanager:9093"
+ "api_hostname" = "api.ens15to161.fbmagma.ninja"
+ "certs_secret" = "orc8r-certs"
+ "configs_secret" = "orc8r-configs"
+ "controller_hostname" = "controller.ens15to161.fbmagma.ninja"
+ "controller_replicas" = "2"
+ "create_usergrafana" = "true"
+ "deploy_nms" = "true"
+ "docker_registry" = "docker-test.artifactory.magmacore.org"
+ "docker_tag" = "1.6.1"
+ "envdir_secret" = "orc8r-envdir"
+ "grafana_pvc_grafanaDashboards" = "grafanadashboards"
+ "grafana_pvc_grafanaData" = "grafanadata"
+ "grafana_pvc_grafanaDatasources" = "grafanadatasources"
+ "grafana_pvc_grafanaProviders" = "grafanaproviders"
+ "image_pull_secret" = "artifactory"
+ "magma_uuid" = "default"
+ "metrics_pvc_promcfg" = "promcfg"
+ "metrics_pvc_promdata" = "promdata"
+ "nginx_replicas" = "2"
+ "nms_certs_secret" = "nms-certs"
+ "nms_hostname" = "*.nms.ens15to161.fbmagma.ninja"
+ "orc8r_db_dialect" = "postgres"
+ "orc8r_db_host" = "orc8rdb.czay9gi9jk5o.eu-west-2.rds.amazonaws.com"
+ "orc8r_db_name" = "orc8r"
+ "orc8r_db_pass" = (sensitive)
+ "orc8r_db_port" = "5432"
+ "orc8r_db_user" = "orc8r"
+ "prometheus_cache_hostname" = "orc8r-prometheus-cache"
+ "prometheus_configurer_version" = "1.0.4"
+ "prometheus_url" = "orc8r-prometheus:9090"
+ "region" = "eu-west-2"
+ "thanos_aws_access_key" = ""
+ "thanos_aws_secret_key" = ""
+ "thanos_bucket" = ""
+ "thanos_compact_selector" = jsonencode({})
+ "thanos_enabled" = "false"
+ "thanos_query_selector" = "compute-type: thanos"
+ "thanos_store_selector" = jsonencode({})
+ "user_grafana_hostname" = "orc8r-user-grafana:3000"
}
}
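The `vars` map above is what Terraform substitutes into the `${...}` placeholders in the values heredoc shown earlier (via `templatefile`-style rendering). As a rough illustration only — not Terraform's actual implementation — Python's `string.Template` happens to use the same `${var}` placeholder syntax, with the values copied from the plan's vars map:

```python
from string import Template

# Fragment of the orc8r values template, using the same ${...} placeholders
# that appear in the plan output above.
values_tpl = Template(
    "nginx:\n"
    "  image:\n"
    "    repository: ${docker_registry}/nginx\n"
    "    tag: \"${docker_tag}\"\n"
    "  replicas: ${nginx_replicas}\n"
)

# Values copied from the plan's vars map (orc8r_db_pass is redacted there,
# so it is omitted here).
tf_vars = {
    "docker_registry": "docker-test.artifactory.magmacore.org",
    "docker_tag": "1.6.1",
    "nginx_replicas": "2",
}

rendered = values_tpl.substitute(tf_vars)
print(rendered)
```

This is how the rendered values end up pointing at the `docker-test` registry and tag `1.6.1`, replacing the inline `1.5.0` values visible in the old heredocs below.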
# module.orc8r-app.helm_release.elasticsearch_curator[0] will be updated in-place | |
~ resource "helm_release" "elasticsearch_curator" { | |
id = "elasticsearch-curator" | |
name = "elasticsearch-curator" | |
~ values = [ | |
- <<-EOT | |
configMaps: | |
config_yml: |- | |
--- | |
client: | |
hosts: | |
- "vpc-orc8r-es-m5o7naykfz4eg35dt4v36o7oga.eu-west-2.es.amazonaws.com" | |
port: "443" | |
use_ssl: "True" | |
logging: | |
loglevel: "INFO" | |
action_file_yml: |- | |
--- | |
actions: | |
1: | |
action: delete_indices | |
description: "Clean up ES by deleting old indices" | |
options: | |
timeout_override: | |
continue_if_exception: False | |
disable_action: False | |
ignore_empty_list: True | |
filters: | |
- filtertype: age | |
source: name | |
direction: older | |
timestring: '%Y.%m.%d' | |
unit: days | |
unit_count: 7 | |
field: | |
stats_result: | |
epoch: | |
exclude: False | |
EOT, | |
+ <<-EOT | |
cronjob: | |
schedule: "0 0 * * *" | |
annotations: {} | |
labels: {} | |
concurrencyPolicy: "" | |
failedJobsHistoryLimit: "" | |
successfulJobsHistoryLimit: "" | |
jobRestartPolicy: Never | |
configMaps: | |
config_yml: |- | |
--- | |
client: | |
hosts: | |
- "vpc-orc8r-es-m5o7naykfz4eg35dt4v36o7oga.eu-west-2.es.amazonaws.com" | |
port: "443" | |
use_ssl: "True" | |
logging: | |
loglevel: "INFO" | |
action_file_yml: |- | |
--- | |
actions: | |
1: | |
action: delete_indices | |
description: "Clean up ES by deleting old indices" | |
options: | |
timeout_override: | |
continue_if_exception: False | |
disable_action: False | |
ignore_empty_list: True | |
filters: | |
- filtertype: age | |
source: name | |
direction: older | |
timestring: '%Y.%m.%d' | |
unit: days | |
unit_count: 7 | |
field: | |
stats_result: | |
epoch: | |
exclude: False | |
2: | |
action: delete_indices | |
description: "Clean up ES by magma log indices if it consumes more than 75% of volume" | |
options: | |
timeout_override: | |
continue_if_exception: False | |
disable_action: False | |
ignore_empty_list: True | |
filters: | |
- filtertype: pattern | |
kind: prefix | |
value: magma- | |
- filtertype: space | |
disk_space: 10 | |
use_age: True | |
source: creation_date | |
EOT, | |
] | |
~ version = "2.1.3" -> "2.2.3" | |
# (24 unchanged attributes hidden) | |
} | |
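The curator `action_file_yml` above deletes indices whose name-encoded date (timestring `'%Y.%m.%d'`, `direction: older`, `unit_count: 7`) is more than seven days old. A minimal sketch of that age filter in Python — the index names here are hypothetical, and the real work is done by elasticsearch-curator, not this helper:

```python
from datetime import datetime, timedelta

def indices_to_delete(names, now, unit_count=7):
    """Mimic curator's age filter: source=name, timestring %Y.%m.%d,
    direction=older, unit=days, unit_count=7."""
    cutoff = now - timedelta(days=unit_count)
    doomed = []
    for name in names:
        # Index names carry a date suffix, e.g. magma-2021.10.01
        date_part = name.rsplit("-", 1)[-1]
        try:
            stamp = datetime.strptime(date_part, "%Y.%m.%d")
        except ValueError:
            continue  # no parsable timestring: the filter skips it
        if stamp < cutoff:
            doomed.append(name)
    return doomed

now = datetime(2021, 10, 14)
names = ["magma-2021.10.01", "magma-2021.10.13", "eventd-2021.09.30"]
print(indices_to_delete(names, now))  # → ['magma-2021.10.01', 'eventd-2021.09.30']
```

The new action `2` added by the upgrade is a separate space-based filter (`filtertype: space`) on the `magma-` prefix, which curator evaluates from index creation dates rather than names.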
# module.orc8r-app.helm_release.fluentd[0] will be updated in-place | |
~ resource "helm_release" "fluentd" { | |
id = "fluentd" | |
name = "fluentd" | |
~ values = [ | |
- <<-EOT | |
replicaCount: 2 | |
output: | |
host: vpc-orc8r-es-m5o7naykfz4eg35dt4v36o7oga.eu-west-2.es.amazonaws.com | |
port: 443 | |
scheme: https | |
rbac: | |
create: false | |
service: | |
annotations: | |
external-dns.alpha.kubernetes.io/hostname: fluentd.ens15to161.fbmagma.ninja | |
type: LoadBalancer | |
ports: | |
- name: "forward" | |
protocol: TCP | |
containerPort: 24224 | |
configMaps: | |
forward-input.conf: |- | |
<source> | |
@type forward | |
port 24224 | |
bind 0.0.0.0 | |
<transport tls> | |
ca_path /certs/certifier.pem | |
cert_path /certs/fluentd.pem | |
private_key_path /certs/fluentd.key | |
client_cert_auth true | |
</transport> | |
</source> | |
output.conf: |- | |
<match eventd> | |
@id eventd_elasticsearch | |
@type elasticsearch | |
@log_level info | |
include_tag_key true | |
host "#{ENV['OUTPUT_HOST']}" | |
port "#{ENV['OUTPUT_PORT']}" | |
scheme "#{ENV['OUTPUT_SCHEME']}" | |
ssl_version "#{ENV['OUTPUT_SSL_VERSION']}" | |
logstash_format true | |
logstash_prefix "eventd" | |
reconnect_on_error true | |
reload_on_failure true | |
reload_connections false | |
log_es_400_reason true | |
<buffer> | |
@type file | |
path /var/log/fluentd-buffers/eventd.kubernetes.system.buffer | |
flush_mode interval | |
retry_type exponential_backoff | |
flush_thread_count 2 | |
flush_interval 5s | |
retry_forever | |
retry_max_interval 30 | |
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}" | |
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}" | |
overflow_action block | |
</buffer> | |
</match> | |
<match **> | |
@id elasticsearch | |
@type elasticsearch | |
@log_level info | |
include_tag_key true | |
host "#{ENV['OUTPUT_HOST']}" | |
port "#{ENV['OUTPUT_PORT']}" | |
scheme "#{ENV['OUTPUT_SCHEME']}" | |
ssl_version "#{ENV['OUTPUT_SSL_VERSION']}" | |
logstash_format true | |
logstash_prefix "magma" | |
reconnect_on_error true | |
reload_on_failure true | |
reload_connections false | |
log_es_400_reason true | |
<buffer> | |
@type file | |
path /var/log/fluentd-buffers/kubernetes.system.buffer | |
flush_mode interval | |
retry_type exponential_backoff | |
flush_thread_count 2 | |
flush_interval 5s | |
retry_forever | |
retry_max_interval 30 | |
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}" | |
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}" | |
overflow_action block | |
</buffer> | |
</match> | |
extraVolumes: | |
- name: certs | |
secret: | |
defaultMode: 420 | |
secretName: fluentd-certs | |
extraVolumeMounts: | |
- name: certs | |
mountPath: /certs | |
readOnly: true | |
EOT, | |
+ <<-EOT | |
replicaCount: 2 | |
output: | |
host: vpc-orc8r-es-m5o7naykfz4eg35dt4v36o7oga.eu-west-2.es.amazonaws.com | |
port: 443 | |
scheme: https | |
rbac: | |
create: false | |
service: | |
annotations: | |
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "magma-uuid=default" | |
external-dns.alpha.kubernetes.io/hostname: fluentd.ens15to161.fbmagma.ninja | |
type: LoadBalancer | |
ports: | |
- name: "forward" | |
protocol: TCP | |
containerPort: 24224 | |
configMaps: | |
forward-input.conf: |- | |
<source> | |
@type forward | |
port 24224 | |
bind 0.0.0.0 | |
<transport tls> | |
ca_path /certs/certifier.pem | |
cert_path /certs/fluentd.pem | |
private_key_path /certs/fluentd.key | |
client_cert_auth true | |
</transport> | |
</source> | |
output.conf: |- | |
<match eventd> | |
@id eventd_elasticsearch | |
@type elasticsearch | |
@log_level info | |
include_tag_key true | |
host "#{ENV['OUTPUT_HOST']}" | |
port "#{ENV['OUTPUT_PORT']}" | |
scheme "#{ENV['OUTPUT_SCHEME']}" | |
ssl_version "#{ENV['OUTPUT_SSL_VERSION']}" | |
logstash_format true | |
logstash_prefix "eventd" | |
reconnect_on_error true | |
reload_on_failure true | |
reload_connections false | |
log_es_400_reason true | |
<buffer> | |
@type file | |
path /var/log/fluentd-buffers/eventd.kubernetes.system.buffer | |
flush_mode interval | |
retry_type exponential_backoff | |
flush_thread_count 2 | |
flush_interval 5s | |
retry_forever | |
retry_max_interval 30 | |
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}" | |
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}" | |
overflow_action block | |
</buffer> | |
</match> | |
<match **> | |
@id elasticsearch | |
@type elasticsearch | |
@log_level info | |
include_tag_key true | |
host "#{ENV['OUTPUT_HOST']}" | |
port "#{ENV['OUTPUT_PORT']}" | |
scheme "#{ENV['OUTPUT_SCHEME']}" | |
ssl_version "#{ENV['OUTPUT_SSL_VERSION']}" | |
logstash_format true | |
logstash_prefix "magma" | |
reconnect_on_error true | |
reload_on_failure true | |
reload_connections false | |
log_es_400_reason true | |
<buffer> | |
@type file | |
path /var/log/fluentd-buffers/kubernetes.system.buffer | |
flush_mode interval | |
retry_type exponential_backoff | |
flush_thread_count 2 | |
flush_interval 5s | |
retry_forever | |
retry_max_interval 30 | |
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}" | |
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}" | |
overflow_action block | |
</buffer> | |
</match> | |
extraVolumes: | |
- name: certs | |
secret: | |
defaultMode: 420 | |
secretName: fluentd-certs | |
extraVolumeMounts: | |
- name: certs | |
mountPath: /certs | |
readOnly: true | |
EOT, | |
] | |
# (25 unchanged attributes hidden) | |
} | |
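In the fluentd `output.conf` above, the two `<match>` blocks are evaluated in order: a tag of exactly `eventd` goes to the `eventd-*` logstash indices, and the catch-all `**` routes everything else to `magma-*`. A small sketch of that first-match-wins routing, using `fnmatch`'s `*` as an approximation of fluentd's `**` pattern:

```python
from fnmatch import fnmatch

# Ordered (pattern, logstash_prefix) pairs mirroring the <match> blocks in
# output.conf above; fluentd picks the first matching block.
ROUTES = [("eventd", "eventd"), ("*", "magma")]

def logstash_prefix(tag):
    """Return the logstash_prefix the fluentd config would use for a tag."""
    for pattern, prefix in ROUTES:
        if fnmatch(tag, pattern):
            return prefix
    return None

print(logstash_prefix("eventd"), logstash_prefix("gateway.syslog"))
```

The only functional change in the diff is the added `aws-load-balancer-additional-resource-tags` service annotation carrying `magma-uuid=default`; the routing itself is unchanged.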
# module.orc8r-app.helm_release.lte-orc8r[0] will be updated in-place | |
~ resource "helm_release" "lte-orc8r" { | |
id = "lte-orc8r" | |
name = "lte-orc8r" | |
~ repository = "https://docker.artifactory.magmacore.org/artifactory/helm" -> "https://docker-test.artifactory.magmacore.org/artifactory/helm" | |
~ values = [ | |
- <<-EOT | |
################################################################################ | |
# Copyright 2020 The Magma Authors. | |
# This source code is licensed under the BSD-style license found in the | |
# LICENSE file in the root directory of this source tree. | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
################################################################################ | |
imagePullSecrets: | |
- name: artifactory | |
secrets: | |
create: false | |
secret: | |
certs: orc8r-certs | |
configs: | |
orc8r: orc8r-configs | |
envdir: orc8r-envdir | |
nginx: | |
create: true | |
podDisruptionBudget: | |
enabled: true | |
image: | |
repository: docker.artifactory.magmacore.org/nginx | |
tag: "1.5.0" | |
replicas: 2 | |
service: | |
enabled: true | |
legacyEnabled: true | |
extraAnnotations: | |
proxy: | |
external-dns.alpha.kubernetes.io/hostname: api.ens15to161.fbmagma.ninja | |
bootstrapLagacy: | |
external-dns.alpha.kubernetes.io/hostname: bootstrapper-controller.ens15to161.fbmagma.ninja | |
clientcertLegacy: | |
external-dns.alpha.kubernetes.io/hostname: controller.ens15to161.fbmagma.ninja | |
name: orc8r-bootstrap-nginx | |
type: LoadBalancer | |
spec: | |
hostname: controller.ens15to161.fbmagma.ninja | |
controller: | |
podDisruptionBudget: | |
enabled: true | |
image: | |
repository: docker.artifactory.magmacore.org/controller | |
tag: "1.5.0" | |
replicas: 2 | |
spec: | |
database: | |
db: orc8r | |
host: orc8rdb.czay9gi9jk5o.eu-west-2.rds.amazonaws.com | |
port: 5432 | |
user: orc8r | |
service_registry: | |
mode: "k8s" | |
metrics: | |
imagePullSecrets: | |
- name: artifactory | |
metrics: | |
volumes: | |
prometheusData: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: promdata | |
prometheusConfig: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: promcfg | |
prometheus: | |
create: true | |
includeOrc8rAlerts: true | |
prometheusCacheHostname: orc8r-prometheus-cache | |
alertmanagerHostname: orc8r-alertmanager | |
alertmanager: | |
create: true | |
prometheusConfigurer: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/prometheus-configurer | |
tag: 1.0.4 | |
prometheusURL: orc8r-prometheus:9090 | |
alertmanagerConfigurer: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/alertmanager-configurer | |
tag: 1.0.4 | |
alertmanagerURL: orc8r-alertmanager:9093 | |
prometheusCache: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/prometheus-edge-hub | |
tag: 1.1.0 | |
limit: 500000 | |
grafana: | |
create: false | |
userGrafana: | |
image: | |
repository: docker.io/grafana/grafana | |
tag: 6.6.2 | |
create: true | |
volumes: | |
datasources: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadatasources | |
dashboardproviders: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanaproviders | |
dashboards: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadashboards | |
grafanaData: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadata | |
thanos: | |
enabled: false | |
compact: | |
nodeSelector: | |
{} | |
store: | |
nodeSelector: | |
{} | |
query: | |
nodeSelector: | |
compute-type: thanos | |
objstore: | |
type: S3 | |
config: | |
bucket: | |
endpoint: s3.eu-west-2.amazonaws.com | |
region: eu-west-2 | |
access_key: | |
secret_key: | |
insecure: false | |
signature_version2: false | |
put_user_metadata: {} | |
http_config: | |
idle_conn_timeout: 0s | |
response_header_timeout: 0s | |
insecure_skip_verify: false | |
trace: | |
enable: false | |
part_size: 0 | |
nms: | |
enabled: true | |
imagePullSecrets: | |
- name: artifactory | |
secret: | |
certs: nms-certs | |
magmalte: | |
create: true | |
image: | |
repository: docker.artifactory.magmacore.org/magmalte | |
tag: "1.5.0" | |
env: | |
api_host: api.ens15to161.fbmagma.ninja | |
mysql_db: orc8r | |
mysql_dialect: postgres | |
mysql_host: orc8rdb.czay9gi9jk5o.eu-west-2.rds.amazonaws.com | |
mysql_port: 5432 | |
mysql_user: orc8r | |
mysql_pass: testpassword | |
grafana_address: orc8r-user-grafana:3000 | |
nginx: | |
create: true | |
service: | |
type: LoadBalancer | |
annotations: | |
external-dns.alpha.kubernetes.io/hostname: "*.nms.ens15to161.fbmagma.ninja" | |
deployment: | |
spec: | |
ssl_cert_name: controller.crt | |
ssl_cert_key_name: controller.key | |
logging: | |
enabled: false | |
EOT, | |
] -> (known after apply) | |
~ version = "0.2.4" -> "0.2.5" | |
# (26 unchanged attributes hidden) | |
set_sensitive { | |
# At least one attribute in this block is (or was) sensitive, | |
# so its contents will not be displayed. | |
} | |
} | |
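The old values shown above are the inline 1.5.0 configuration; the replacement (`known after apply`) is the templated document from the vars map earlier. A hypothetical helper for eyeballing what actually changes between two rendered values documents — a recursive diff over nested dicts, with the two leaf changes taken from this plan:

```python
def dict_diff(old, new, path=""):
    """Yield (path, old_value, new_value) for every leaf that differs
    between two nested dicts."""
    for key in sorted(set(old) | set(new)):
        here = f"{path}.{key}" if path else key
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            yield from dict_diff(a, b, here)
        elif a != b:
            yield (here, a, b)

# Image settings before and after, as shown in this plan.
old = {"nginx": {"image": {"repository": "docker.artifactory.magmacore.org/nginx",
                           "tag": "1.5.0"}}}
new = {"nginx": {"image": {"repository": "docker-test.artifactory.magmacore.org/nginx",
                           "tag": "1.6.1"}}}
print(list(dict_diff(old, new)))
```

This is only a reading aid; Terraform itself treats the whole heredoc as one opaque string, which is why the diff renders as a full remove-and-add of `values`.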
# module.orc8r-app.helm_release.orc8r will be updated in-place | |
~ resource "helm_release" "orc8r" { | |
id = "orc8r" | |
name = "orc8r" | |
~ repository = "https://docker.artifactory.magmacore.org/artifactory/helm" -> "https://docker-test.artifactory.magmacore.org/artifactory/helm" | |
~ values = [ | |
- <<-EOT | |
################################################################################ | |
# Copyright 2020 The Magma Authors. | |
# This source code is licensed under the BSD-style license found in the | |
# LICENSE file in the root directory of this source tree. | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
################################################################################ | |
imagePullSecrets: | |
- name: artifactory | |
secrets: | |
create: false | |
secret: | |
certs: orc8r-certs | |
configs: | |
orc8r: orc8r-configs | |
envdir: orc8r-envdir | |
nginx: | |
create: true | |
podDisruptionBudget: | |
enabled: true | |
image: | |
repository: docker.artifactory.magmacore.org/nginx | |
tag: "1.5.0" | |
replicas: 2 | |
service: | |
enabled: true | |
legacyEnabled: true | |
extraAnnotations: | |
proxy: | |
external-dns.alpha.kubernetes.io/hostname: api.ens15to161.fbmagma.ninja | |
bootstrapLagacy: | |
external-dns.alpha.kubernetes.io/hostname: bootstrapper-controller.ens15to161.fbmagma.ninja | |
clientcertLegacy: | |
external-dns.alpha.kubernetes.io/hostname: controller.ens15to161.fbmagma.ninja | |
name: orc8r-bootstrap-nginx | |
type: LoadBalancer | |
spec: | |
hostname: controller.ens15to161.fbmagma.ninja | |
controller: | |
podDisruptionBudget: | |
enabled: true | |
image: | |
repository: docker.artifactory.magmacore.org/controller | |
tag: "1.5.0" | |
replicas: 2 | |
spec: | |
database: | |
db: orc8r | |
host: orc8rdb.czay9gi9jk5o.eu-west-2.rds.amazonaws.com | |
port: 5432 | |
user: orc8r | |
service_registry: | |
mode: "k8s" | |
metrics: | |
imagePullSecrets: | |
- name: artifactory | |
metrics: | |
volumes: | |
prometheusData: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: promdata | |
prometheusConfig: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: promcfg | |
prometheus: | |
create: true | |
includeOrc8rAlerts: true | |
prometheusCacheHostname: orc8r-prometheus-cache | |
alertmanagerHostname: orc8r-alertmanager | |
alertmanager: | |
create: true | |
prometheusConfigurer: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/prometheus-configurer | |
tag: 1.0.4 | |
prometheusURL: orc8r-prometheus:9090 | |
alertmanagerConfigurer: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/alertmanager-configurer | |
tag: 1.0.4 | |
alertmanagerURL: orc8r-alertmanager:9093 | |
prometheusCache: | |
create: true | |
image: | |
repository: docker.io/facebookincubator/prometheus-edge-hub | |
tag: 1.1.0 | |
limit: 500000 | |
grafana: | |
create: false | |
userGrafana: | |
image: | |
repository: docker.io/grafana/grafana | |
tag: 6.6.2 | |
create: true | |
volumes: | |
datasources: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadatasources | |
dashboardproviders: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanaproviders | |
dashboards: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadashboards | |
grafanaData: | |
volumeSpec: | |
persistentVolumeClaim: | |
claimName: grafanadata | |
thanos: | |
enabled: false | |
compact: | |
nodeSelector: | |
{} | |
store: | |
nodeSelector: | |
{} | |
query: | |
nodeSelector: | |
compute-type: thanos | |
objstore: | |
type: S3 | |
config: | |
bucket: | |
endpoint: s3.eu-west-2.amazonaws.com | |
region: eu-west-2 | |
access_key: | |
secret_key: | |
insecure: false | |
signature_version2: false | |
put_user_metadata: {} | |
http_config: | |
idle_conn_timeout: 0s | |
response_header_timeout: 0s | |
insecure_skip_verify: false | |
trace: | |
enable: false | |
part_size: 0 | |
nms: | |
enabled: true | |
imagePullSecrets: | |
- name: artifactory | |
secret: | |
certs: nms-certs | |
magmalte: | |
create: true | |
image: | |
repository: docker.artifactory.magmacore.org/magmalte | |
tag: "1.5.0" | |
env: | |
api_host: api.ens15to161.fbmagma.ninja | |
mysql_db: orc8r | |
mysql_dialect: postgres | |
mysql_host: orc8rdb.czay9gi9jk5o.eu-west-2.rds.amazonaws.com | |
mysql_port: 5432 | |
mysql_user: orc8r | |
mysql_pass: testpassword | |
grafana_address: orc8r-user-grafana:3000 | |
nginx: | |
create: true | |
service: | |
type: LoadBalancer | |
annotations: | |
external-dns.alpha.kubernetes.io/hostname: "*.nms.ens15to161.fbmagma.ninja" | |
deployment: | |
spec: | |
ssl_cert_name: controller.crt | |
ssl_cert_key_name: controller.key | |
logging: | |
enabled: false | |
EOT, | |
] -> (known after apply) | |
~ version = "1.5.21" -> "1.5.23" | |
# (26 unchanged attributes hidden) | |
set_sensitive { | |
# At least one attribute in this block is (or was) sensitive, | |
# so its contents will not be displayed. | |
} | |
} | |
# module.orc8r-app.kubernetes_secret.artifactory will be updated in-place
~ resource "kubernetes_secret" "artifactory" {
~ data = (sensitive value)
id = "orc8r/artifactory"
# (1 unchanged attribute hidden)
# (1 unchanged block hidden)
}
# module.orc8r-app.kubernetes_secret.fluentd_certs will be updated in-place
~ resource "kubernetes_secret" "fluentd_certs" {
~ data = (sensitive value)
id = "orc8r/fluentd-certs"
# (1 unchanged attribute hidden)
# (1 unchanged block hidden)
}
# module.orc8r-app.kubernetes_secret.nms_certs[0] will be updated in-place
~ resource "kubernetes_secret" "nms_certs" {
~ data = (sensitive value)
id = "orc8r/nms-certs"
# (1 unchanged attribute hidden)
# (1 unchanged block hidden)
}
# module.orc8r-app.kubernetes_secret.orc8r_certs will be updated in-place
~ resource "kubernetes_secret" "orc8r_certs" {
~ data = (sensitive value)
id = "orc8r/orc8r-certs"
# (1 unchanged attribute hidden)
# (1 unchanged block hidden)
}
# module.orc8r.module.eks.data.http.wait_for_cluster[0] will be read during apply
# (config refers to values not yet known)
<= data "http" "wait_for_cluster" {
+ body = (known after apply)
+ ca_certificate = <<-EOT
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIxMTAxNDIwMzMwMFoXDTMxMTAxMjIwMzMwMFowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMDX
CqJ/hy+TVjo80KGEjMOh8frnpJihEYUjqtp3tvqKJDZGMhhs64b/3PlxtKdsKYf9
J+GD3U6BJsbWmo+ikSH6f+JFQYsI1vV7BbLIyTsRmHZZ936Yqg7IbM/mfm5tvhYS
RaDonab6SM/LgVwTolJhlsJtT9tgEojI1ozEHvba6t87Sd17ytWhT9qNE8dzQrt3
BVjvbdfH796LOKcS4yk/nHdBFUqrveospDwAf442XoPaBOdOtqB5lYKQieOrXSAm
Kj6WBSmuABek3fyc8WjQ8CPhL9zm6pl6KWU098nX7YvudQY84Od9MJxrZtKBYOho
5mRSlRFKUxqV3t3aUVMCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMBSzTVksa5UhhIUjnmbuvZm6al9
/kGAIcqNYhPHY80KQpd/VhJXVG+UCG9dyuWm1xW/h5yFuiLj7XCkeFf7DbT/kTlL
t0G98jZXazDX7QrcYeGFtuQ+6zuAVDHHd9bxqBudIEfrTB6wWB7JiMF7NpL+ZQD/
XpG+dbGDEc7BNthXgvcMtVgtJLFC3eUfuO9qh6jAPn1ZYgGQ8yPWF+9cuhVUNYZB
/uKEzBmVFGzvqIWsL9uih1YIcHbVQg7f+l9KXURxVj9i9Qpk4fs/rcmy2fLUThRp
OGIniu73xLged9X7vhRjd2breY+cPw3R9CHrgYC/FAhuNIfFKrYVgzJAM+Q=
-----END CERTIFICATE-----
EOT
+ id = (known after apply)
+ response_headers = (known after apply)
+ timeout = 300
+ url = "https://C01ACFD8774EC5F087F33318D32C5914.gr7.eu-west-2.eks.amazonaws.com/healthz"
}
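The `wait_for_cluster` data source above polls the EKS `/healthz` endpoint until it responds, giving up after the 300-second timeout. A generic sketch of that wait loop — offline here, so the probe is injected rather than an HTTP call, and the function name is just illustrative:

```python
import time

def wait_for_cluster(check, timeout=300, interval=5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.
    Returns True on success, False if the deadline passes first."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False

# Simulated endpoint that becomes healthy on the third probe.
probes = iter([False, False, True])
print(wait_for_cluster(lambda: next(probes), timeout=30, interval=0,
                       sleep=lambda s: None))  # → True
```

In the real module the check is an HTTPS GET against the URL shown above, verified with the printed cluster CA.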
# module.orc8r.module.eks.aws_autoscaling_group.workers[0] will be updated in-place
~ resource "aws_autoscaling_group" "workers" {
id = "orc8r-wg-120211014203626650900000015"
~ launch_configuration = "orc8r-wg-120211014203620138100000012" -> (known after apply)
name = "orc8r-wg-120211014203626650900000015"
~ tags = [
- {
- "key" = "Name"
- "propagate_at_launch" = "true"
- "value" = "orc8r-wg-1-eks_asg"
},
- {
- "key" = "k8s.io/cluster-autoscaler/disabled"
- "propagate_at_launch" = "false"
- "value" = "true"
},
- {
- "key" = "k8s.io/cluster-autoscaler/node-template/resources/ephemeral-storage"
- "propagate_at_launch" = "false"
- "value" = "100Gi"
},
- {
- "key" = "k8s.io/cluster-autoscaler/orc8r"
- "propagate_at_launch" = "false"
- "value" = "orc8r"
},
- {
- "key" = "k8s.io/cluster/orc8r"
- "propagate_at_launch" = "true"
- "value" = "owned"
},
- {
- "key" = "kubernetes.io/cluster/orc8r"
- "propagate_at_launch" = "true"
- "value" = "owned"
},
]
# (23 unchanged attributes hidden)
+ tag {
+ key = "Name"
+ propagate_at_launch = true
+ value = "orc8r-wg-1-eks_asg"
}
+ tag {
+ key = "k8s.io/cluster/orc8r"
+ propagate_at_launch = true
+ value = "owned"
}
+ tag {
+ key = "kubernetes.io/cluster/orc8r"
+ propagate_at_launch = true
+ value = "owned"
}
}
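The ASG update above is the AWS provider's migration from the deprecated `tags` list attribute to repeated `tag` blocks. Note that the three `k8s.io/cluster-autoscaler/...` hints are not re-added as `tag` blocks, which lines up with the worker-autoscaling IAM policy being destroyed further down. A sketch of that carry-over, with the tag data copied from the plan:

```python
# The six tags removed from the `tags` list attribute, as shown in the plan.
old_tags = [
    {"key": "Name", "propagate_at_launch": "true", "value": "orc8r-wg-1-eks_asg"},
    {"key": "k8s.io/cluster-autoscaler/disabled",
     "propagate_at_launch": "false", "value": "true"},
    {"key": "k8s.io/cluster-autoscaler/node-template/resources/ephemeral-storage",
     "propagate_at_launch": "false", "value": "100Gi"},
    {"key": "k8s.io/cluster-autoscaler/orc8r",
     "propagate_at_launch": "false", "value": "orc8r"},
    {"key": "k8s.io/cluster/orc8r", "propagate_at_launch": "true", "value": "owned"},
    {"key": "kubernetes.io/cluster/orc8r", "propagate_at_launch": "true", "value": "owned"},
]

def carried_over(tags):
    """Keys that reappear as `tag` blocks in this plan: everything except
    the cluster-autoscaler hints."""
    return [t["key"] for t in tags
            if not t["key"].startswith("k8s.io/cluster-autoscaler/")]

print(carried_over(old_tags))
```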
# module.orc8r.module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
+ resource "aws_iam_policy" "cluster_elb_sl_role_creation" {
+ arn = (known after apply)
+ description = "Permissions for EKS to create AWSServiceRoleForElasticLoadBalancing service-linked role"
+ id = (known after apply)
+ name = (known after apply)
+ name_prefix = "orc8r-elb-sl-role-creation"
+ path = "/"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "ec2:DescribeInternetGateways",
+ "ec2:DescribeAddresses",
+ "ec2:DescribeAccountAttributes",
]
+ Effect = "Allow"
+ Resource = "*"
+ Sid = ""
},
]
+ Version = "2012-10-17"
}
)
+ policy_id = (known after apply)
+ tags_all = (known after apply)
}
# module.orc8r.module.eks.aws_iam_policy.worker_autoscaling[0] will be destroyed
- resource "aws_iam_policy" "worker_autoscaling" {
- arn = "arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a" -> null
- description = "EKS worker node autoscaling policy for cluster orc8r" -> null
- id = "arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a" -> null
- name = "eks-worker-autoscaling-orc8r2021101420361739920000000a" -> null
- name_prefix = "eks-worker-autoscaling-orc8r" -> null
- path = "/" -> null
- policy = jsonencode(
{
- Statement = [
- {
- Action = [
- "ec2:DescribeLaunchTemplateVersions",
- "autoscaling:DescribeTags",
- "autoscaling:DescribeLaunchConfigurations",
- "autoscaling:DescribeAutoScalingInstances",
- "autoscaling:DescribeAutoScalingGroups",
]
- Effect = "Allow"
- Resource = "*"
- Sid = "eksWorkerAutoscalingAll"
},
- {
- Action = [
- "autoscaling:UpdateAutoScalingGroup",
- "autoscaling:TerminateInstanceInAutoScalingGroup",
- "autoscaling:SetDesiredCapacity",
]
- Condition = {
- StringEquals = {
- autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled = [
- "true",
]
- autoscaling:ResourceTag/kubernetes.io/cluster/orc8r = [
- "owned",
]
}
}
- Effect = "Allow"
- Resource = "*"
- Sid = "eksWorkerAutoscalingOwn"
},
]
- Version = "2012-10-17"
}
) -> null
- policy_id = "ANPAQDRK4HSLIXZGPSW5B" -> null
- tags = {} -> null
- tags_all = {} -> null
}
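The destroyed policy's second statement (`eksWorkerAutoscalingOwn`) only grants the scaling actions on ASGs whose tags satisfy the `StringEquals` condition shown above. A sketch of that condition check — the condition keys are shown here without their `autoscaling:ResourceTag/` prefix for brevity, and IAM's real evaluator of course handles far more than this:

```python
def condition_matches(resource_tags, string_equals):
    """Evaluate a simplified IAM StringEquals condition: every condition
    key must be present on the resource with one of its allowed values."""
    return all(resource_tags.get(key) in allowed
               for key, allowed in string_equals.items())

# Condition from the destroyed eksWorkerAutoscalingOwn statement
# (autoscaling:ResourceTag/ prefix dropped).
string_equals = {
    "k8s.io/cluster-autoscaler/enabled": ["true"],
    "kubernetes.io/cluster/orc8r": ["owned"],
}

# An ASG missing the cluster-autoscaler/enabled tag does not qualify.
asg_tags = {"kubernetes.io/cluster/orc8r": "owned"}
print(condition_matches(asg_tags, string_equals))  # → False
```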
# module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
+ resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSVPCResourceControllerPolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
+ role = "orc8r20211014202558934600000003"
}
# module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
+ resource "aws_iam_role_policy_attachment" "cluster_elb_sl_role_creation" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "orc8r20211014202558934600000003"
}
# module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_autoscaling[0] will be destroyed
- resource "aws_iam_role_policy_attachment" "workers_autoscaling" {
- id = "orc8r20211014203617347500000008-20211014203617780400000011" -> null
- policy_arn = "arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a" -> null
- role = "orc8r20211014203617347500000008" -> null
}
# module.orc8r.module.eks.aws_launch_configuration.workers[0] must be replaced
+/- resource "aws_launch_configuration" "workers" {
~ arn = "arn:aws:autoscaling:eu-west-2:007606123670:launchConfiguration:c15ef5fd-4266-481d-a464-818413cb5c4b:launchConfigurationName/orc8r-wg-120211014203620138100000012" -> (known after apply)
~ id = "orc8r-wg-120211014203620138100000012" -> (known after apply)
~ name = "orc8r-wg-120211014203620138100000012" -> (known after apply)
~ user_data_base64 = "IyEvYmluL2Jhc2ggLXhlCgojIEFsbG93IHVzZXIgc3VwcGxpZWQgcHJlIHVzZXJkYXRhIGNvZGUKCgojIEJvb3RzdHJhcCBhbmQgam9pbiB0aGUgY2x1c3RlcgovZXRjL2Vrcy9ib290c3RyYXAuc2ggLS1iNjQtY2x1c3Rlci1jYSAnTFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVTjVSRU5EUVdKRFowRjNTVUpCWjBsQ1FVUkJUa0puYTNGb2EybEhPWGN3UWtGUmMwWkJSRUZXVFZKTmQwVlJXVVJXVVZGRVJYZHdjbVJYU213S1kyMDFiR1JIVm5wTlFqUllSRlJKZUUxVVFYaE9SRWwzVFhwTmQwMUdiMWhFVkUxNFRWUkJlRTFxU1hkTmVrMTNUVVp2ZDBaVVJWUk5Ra1ZIUVRGVlJRcEJlRTFMWVROV2FWcFlTblZhV0ZKc1kzcERRMEZUU1hkRVVWbEtTMjlhU1doMlkwNUJVVVZDUWxGQlJHZG5SVkJCUkVORFFWRnZRMmRuUlVKQlRVUllDa054U2k5b2VTdFVWbXB2T0RCTFIwVnFUVTlvT0daeWJuQkthV2hGV1ZWcWNYUndNM1IyY1V0S1JGcEhUV2hvY3pZMFlpOHpVR3g0ZEV0a2MwdFpaamtLU2l0SFJETlZOa0pLYzJKWGJXOHJhV3RUU0RabUswcEdVVmx6U1RGMlZqZENZa3hKZVZSelVtMUlXbG81TXpaWmNXYzNTV0pOTDIxbWJUVjBkbWhaVXdwU1lVUnZibUZpTmxOTkwweG5WbmRVYjJ4S2FHeHpTblJVT1hSblJXOXFTVEZ2ZWtWSWRtSmhOblE0TjFOa01UZDVkRmRvVkRseFRrVTRaSHBSY25RekNrSldhblppWkdaSU56azJURTlMWTFNMGVXc3Zia2hrUWtaVmNYSjJaVzl6Y0VSM1FXWTBOREpZYjFCaFFrOWtUM1J4UWpWc1dVdFJhV1ZQY2xoVFFXMEtTMm8yVjBKVGJYVkJRbVZyTTJaNVl6aFhhbEU0UTFCb1REbDZiVFp3YkRaTFYxVXdPVGh1V0RkWmRuVmtVVms0TkU5a09VMUtlSEphZEV0Q1dVOW9id28xYlZKVGJGSkdTMVY0Y1ZZemRETmhWVlpOUTBGM1JVRkJZVTFxVFVORmQwUm5XVVJXVWpCUVFWRklMMEpCVVVSQlowdHJUVUU0UjBFeFZXUkZkMFZDQ2k5M1VVWk5RVTFDUVdZNGQwUlJXVXBMYjFwSmFIWmpUa0ZSUlV4Q1VVRkVaMmRGUWtGTlFsTjZWRlpyYzJFMVZXaG9TVlZxYm0xaWRYWmFiVFpoYkRrS0wydEhRVWxqY1U1WmFGQklXVGd3UzFGd1pDOVdhRXBZVmtjclZVTkhPV1I1ZFZkdE1YaFhMMmcxZVVaMWFVeHFOMWhEYTJWR1pqZEVZbFF2YTFSc1RBcDBNRWM1T0dwYVdHRjZSRmczVVhKaldXVkhSblIxVVNzMmVuVkJWa1JJU0dRNVluaHhRblZrU1VWbWNsUkNObmRYUWpkS2FVMUdOMDV3VEN0YVVVUXZDbGh3Unl0a1lrZEVSV00zUWs1MGFGaG5kbU5OZEZabmRFcE1Sa016WlZWbWRVODVjV2cyYWtGUWJqRmFXV2RIVVRoNVVGZEdLemxqZFdoV1ZVNVpXa0lLTDNWTFJYcENiVlpHUjNwMmNVbFhjMHc1ZFdsb01WbEpZMGhpVmxGbk4yWXJiRGxMV0ZWU2VGWnFPV2s1VVhCck5HWnpMM0pqYlhreVpreFZWR2hTY0FwUFIwbHVhWFUzTTNoTVoyVmtPVmczZG1oU2FtUXlZbkpsV1N0alVIY3pVamxEU0hKbldVTXZSa0ZvZFU1SlprWkxjbGxXWjNwS1FVMHJVVDBLTFMwdExTMUZUa1FnUTBWU1ZFbE
dTVU5CVkVVdExTMHRMUW89JyAtLWFwaXNlcnZlci1lbmRwb2ludCAnaHR0cHM6Ly9DMDFBQ0ZEODc3NEVDNUYwODdGMzMzMThEMzJDNTkxNC5ncjcuZXUtd2VzdC0yLmVrcy5hbWF6b25hd3MuY29tJyAgLS1rdWJlbGV0LWV4dHJhLWFyZ3MgIiIgJ29yYzhyJwoKIyBBbGxvdyB1c2VyIHN1cHBsaWVkIHVzZXJkYXRhIGNvZGUKCg==" -> "IyEvYmluL2Jhc2ggLWUKCiMgQWxsb3cgdXNlciBzdXBwbGllZCBwcmUgdXNlcmRhdGEgY29kZQoKCiMgQm9vdHN0cmFwIGFuZCBqb2luIHRoZSBjbHVzdGVyCi9ldGMvZWtzL2Jvb3RzdHJhcC5zaCAtLWI2NC1jbHVzdGVyLWNhICdMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VONVJFTkRRV0pEWjBGM1NVSkJaMGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBWUldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkplRTFVUVhoT1JFbDNUWHBOZDAxR2IxaEVWRTE0VFZSQmVFMXFTWGROZWsxM1RVWnZkMFpVUlZSTlFrVkhRVEZWUlFwQmVFMUxZVE5XYVZwWVNuVmFXRkpzWTNwRFEwRlRTWGRFVVZsS1MyOWFTV2gyWTA1QlVVVkNRbEZCUkdkblJWQkJSRU5EUVZGdlEyZG5SVUpCVFVSWUNrTnhTaTlvZVN0VVZtcHZPREJMUjBWcVRVOW9PR1p5Ym5CS2FXaEZXVlZxY1hSd00zUjJjVXRLUkZwSFRXaG9jelkwWWk4elVHeDRkRXRrYzB0Wlpqa0tTaXRIUkROVk5rSktjMkpYYlc4cmFXdFRTRFptSzBwR1VWbHpTVEYyVmpkQ1lreEplVlJ6VW0xSVdsbzVNelpaY1djM1NXSk5MMjFtYlRWMGRtaFpVd3BTWVVSdmJtRmlObE5OTDB4blZuZFViMnhLYUd4elNuUlVPWFJuUlc5cVNURnZla1ZJZG1KaE5uUTROMU5rTVRkNWRGZG9WRGx4VGtVNFpIcFJjblF6Q2tKV2FuWmlaR1pJTnprMlRFOUxZMU0wZVdzdmJraGtRa1pWY1hKMlpXOXpjRVIzUVdZME5ESlliMUJoUWs5a1QzUnhRalZzV1V0UmFXVlBjbGhUUVcwS1MybzJWMEpUYlhWQlFtVnJNMlo1WXpoWGFsRTRRMUJvVERsNmJUWndiRFpMVjFVd09UaHVXRGRaZG5Wa1VWazRORTlrT1UxS2VISmFkRXRDV1U5b2J3bzFiVkpUYkZKR1MxVjRjVll6ZEROaFZWWk5RMEYzUlVGQllVMXFUVU5GZDBSbldVUldVakJRUVZGSUwwSkJVVVJCWjB0clRVRTRSMEV4VldSRmQwVkNDaTkzVVVaTlFVMUNRV1k0ZDBSUldVcExiMXBKYUhaalRrRlJSVXhDVVVGRVoyZEZRa0ZOUWxONlZGWnJjMkUxVldob1NWVnFibTFpZFhaYWJUWmhiRGtLTDJ0SFFVbGpjVTVaYUZCSVdUZ3dTMUZ3WkM5V2FFcFlWa2NyVlVOSE9XUjVkVmR0TVhoWEwyZzFlVVoxYVV4cU4xaERhMlZHWmpkRVlsUXZhMVJzVEFwME1FYzVPR3BhV0dGNlJGZzNVWEpqV1dWSFJuUjFVU3MyZW5WQlZrUklTR1E1WW5oeFFuVmtTVVZtY2xSQ05uZFhRamRLYVUxR04wNXdUQ3RhVVVRdkNsaHdSeXRrWWtkRVJXTTNRazUwYUZobmRtTk5kRlpuZEVwTVJrTXpaVlZtZFU4NWNXZzJha0ZRYmpGYVdXZEhVVGg1VUZkR0t6bGpkV2hXVlU1WldrSUtMM1ZMUlhwQ2JWWkdS
M3AyY1VsWGMwdzVkV2xvTVZsSlkwaGlWbEZuTjJZcmJEbExXRlZTZUZacU9XazVVWEJyTkdaekwzSmpiWGt5Wmt4VlZHaFNjQXBQUjBsdWFYVTNNM2hNWjJWa09WZzNkbWhTYW1ReVluSmxXU3RqVUhjelVqbERTSEpuV1VNdlJrRm9kVTVKWmtaTGNsbFdaM3BLUVUwclVUMEtMUzB0TFMxRlRrUWdRMFZTVkVsR1NVTkJWRVV0TFMwdExRbz0nIC0tYXBpc2VydmVyLWVuZHBvaW50ICdodHRwczovL0MwMUFDRkQ4Nzc0RUM1RjA4N0YzMzMxOEQzMkM1OTE0LmdyNy5ldS13ZXN0LTIuZWtzLmFtYXpvbmF3cy5jb20nICAtLWt1YmVsZXQtZXh0cmEtYXJncyAiIiAnb3JjOHInCgojIEFsbG93IHVzZXIgc3VwcGxpZWQgdXNlcmRhdGEgY29kZQoK" # forces replacement | |
- vpc_classic_link_security_groups = [] -> null | |
# (9 unchanged attributes hidden) | |
+ ebs_block_device { | |
+ delete_on_termination = (known after apply) | |
+ device_name = (known after apply) | |
+ encrypted = (known after apply) | |
+ iops = (known after apply) | |
+ no_device = (known after apply) | |
+ snapshot_id = (known after apply) | |
+ throughput = (known after apply) | |
+ volume_size = (known after apply) | |
+ volume_type = (known after apply) | |
} | |
+ metadata_options { # forces replacement | |
+ http_endpoint = "enabled" # forces replacement | |
+ http_put_response_hop_limit = (known after apply) | |
+ http_tokens = "optional" # forces replacement | |
} | |
~ root_block_device { | |
~ throughput = 0 -> (known after apply) | |
# (5 unchanged attributes hidden) | |
} | |
} | |
# module.orc8r.module.eks.kubernetes_config_map.aws_auth[0] will be updated in-place | |
~ resource "kubernetes_config_map" "aws_auth" { | |
~ data = { | |
~ "mapRoles" = <<-EOT | |
- - rolearn: arn:aws:iam::007606123670:role/orc8r20211014203617347500000008 | |
- username: system:node:{{EC2PrivateDNSName}} | |
- groups: | |
- - system:bootstrappers | |
- - system:nodes | |
- | |
- | |
+ - "groups": | |
+ - "system:bootstrappers" | |
+ - "system:nodes" | |
+ "rolearn": "arn:aws:iam::007606123670:role/orc8r20211014203617347500000008" | |
+ "username": "system:node:{{EC2PrivateDNSName}}" | |
EOT | |
# (2 unchanged elements hidden) | |
} | |
id = "kube-system/aws-auth" | |
# (1 unchanged attribute hidden) | |
~ metadata { | |
~ labels = { | |
+ "app.kubernetes.io/managed-by" = "Terraform" | |
+ "terraform.io/module" = "terraform-aws-modules.eks.aws" | |
} | |
name = "aws-auth" | |
# (6 unchanged attributes hidden) | |
} | |
} | |
# module.orc8r.module.eks.local_file.kubeconfig[0] must be replaced | |
-/+ resource "local_file" "kubeconfig" { | |
~ content = <<-EOT # forces replacement | |
apiVersion: v1 | |
preferences: {} | |
kind: Config | |
clusters: | |
- cluster: | |
server: https://C01ACFD8774EC5F087F33318D32C5914.gr7.eu-west-2.eks.amazonaws.com | |
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1UQXhOREl3TXpNd01Gb1hEVE14TVRBeE1qSXdNek13TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTURYCkNxSi9oeStUVmpvODBLR0VqTU9oOGZybnBKaWhFWVVqcXRwM3R2cUtKRFpHTWhoczY0Yi8zUGx4dEtkc0tZZjkKSitHRDNVNkJKc2JXbW8raWtTSDZmK0pGUVlzSTF2VjdCYkxJeVRzUm1IWlo5MzZZcWc3SWJNL21mbTV0dmhZUwpSYURvbmFiNlNNL0xnVndUb2xKaGxzSnRUOXRnRW9qSTFvekVIdmJhNnQ4N1NkMTd5dFdoVDlxTkU4ZHpRcnQzCkJWanZiZGZINzk2TE9LY1M0eWsvbkhkQkZVcXJ2ZW9zcER3QWY0NDJYb1BhQk9kT3RxQjVsWUtRaWVPclhTQW0KS2o2V0JTbXVBQmVrM2Z5YzhXalE4Q1BoTDl6bTZwbDZLV1UwOThuWDdZdnVkUVk4NE9kOU1KeHJadEtCWU9obwo1bVJTbFJGS1V4cVYzdDNhVVZNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNQlN6VFZrc2E1VWhoSVVqbm1idXZabTZhbDkKL2tHQUljcU5ZaFBIWTgwS1FwZC9WaEpYVkcrVUNHOWR5dVdtMXhXL2g1eUZ1aUxqN1hDa2VGZjdEYlQva1RsTAp0MEc5OGpaWGF6RFg3UXJjWWVHRnR1USs2enVBVkRISGQ5YnhxQnVkSUVmclRCNndXQjdKaU1GN05wTCtaUUQvClhwRytkYkdERWM3Qk50aFhndmNNdFZndEpMRkMzZVVmdU85cWg2akFQbjFaWWdHUTh5UFdGKzljdWhWVU5ZWkIKL3VLRXpCbVZGR3p2cUlXc0w5dWloMVlJY0hiVlFnN2YrbDlLWFVSeFZqOWk5UXBrNGZzL3JjbXkyZkxVVGhScApPR0luaXU3M3hMZ2VkOVg3dmhSamQyYnJlWStjUHczUjlDSHJnWUMvRkFodU5JZkZLcllWZ3pKQU0rUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= | |
name: eks_orc8r | |
contexts: | |
- context: | |
cluster: eks_orc8r | |
user: eks_orc8r | |
name: eks_orc8r | |
current-context: eks_orc8r | |
users: | |
- name: eks_orc8r | |
user: | |
exec: | |
apiVersion: client.authentication.k8s.io/v1alpha1 | |
command: aws-iam-authenticator | |
args: | |
- "token" | |
- "-i" | |
- "orc8r" | |
- | |
- | |
EOT | |
~ directory_permission = "0777" -> "0755" # forces replacement | |
~ file_permission = "0777" -> "0600" # forces replacement | |
~ id = "89df0f64e1d41b7520de987be686742c6d332b6f" -> (known after apply) | |
# (1 unchanged attribute hidden) | |
} | |
# module.orc8r.module.eks.null_resource.wait_for_cluster[0] will be destroyed | |
- resource "null_resource" "wait_for_cluster" { | |
- id = "4351803845162643018" -> null | |
} | |
# module.orc8r.module.eks.random_pet.workers[0] will be destroyed | |
- resource "random_pet" "workers" { | |
- id = "fleet-wren" -> null | |
- keepers = { | |
- "lc_name" = "orc8r-wg-120211014203620138100000012" | |
} -> null | |
- length = 2 -> null | |
- separator = "-" -> null | |
} | |
Plan: 7 to add, 13 to change, 6 to destroy. | |
╷ | |
│ Warning: Version constraints inside provider configuration blocks are deprecated | |
│ | |
│ on .terraform/modules/orc8r/orc8r/cloud/deploy/terraform/orc8r-aws/providers.tf line 19, in provider "random": | |
│ 19: version = "~> 2.1" | |
│ | |
│ Terraform 0.13 and earlier allowed provider version constraints inside the provider configuration block, but that is now deprecated and will be removed in a future | |
│ version of Terraform. To silence this warning, move the provider version constraint into the required_providers block. | |
╵ | |
Do you want to perform these actions? | |
Terraform will perform the actions described above. | |
Only 'yes' will be accepted to approve. | |
Enter a value: yes | |
module.orc8r.module.eks.local_file.kubeconfig[0]: Destroying... [id=89df0f64e1d41b7520de987be686742c6d332b6f] | |
module.orc8r.module.eks.local_file.kubeconfig[0]: Destruction complete after 0s | |
module.orc8r-app.kubernetes_secret.artifactory: Modifying... [id=orc8r/artifactory] | |
module.orc8r-app.kubernetes_secret.artifactory: Modifications complete after 0s [id=orc8r/artifactory] | |
module.orc8r.module.eks.data.http.wait_for_cluster[0]: Reading... | |
module.orc8r.module.eks.local_file.kubeconfig[0]: Creating... | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_autoscaling[0]: Destroying... [id=orc8r20211014203617347500000008-20211014203617780400000011] | |
module.orc8r.module.eks.local_file.kubeconfig[0]: Creation complete after 0s [id=be94c6ff828413c32a14e845263db47f0456c7eb] | |
module.orc8r-app.kubernetes_secret.fluentd_certs: Modifying... [id=orc8r/fluentd-certs] | |
module.orc8r.module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0]: Creating... | |
module.orc8r.aws_sns_topic.sns_orc8r_topic: Creating... | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]: Creating... | |
module.orc8r-app.kubernetes_secret.orc8r_certs: Modifying... [id=orc8r/orc8r-certs] | |
module.orc8r-app.kubernetes_secret.nms_certs[0]: Modifying... [id=orc8r/nms-certs] | |
module.orc8r-app.kubernetes_secret.fluentd_certs: Modifications complete after 0s [id=orc8r/fluentd-certs] | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.workers_autoscaling[0]: Destruction complete after 0s | |
module.orc8r.module.eks.data.http.wait_for_cluster[0]: Read complete after 1s [id=https://C01ACFD8774EC5F087F33318D32C5914.gr7.eu-west-2.eks.amazonaws.com/healthz] | |
module.orc8r-app.kubernetes_secret.nms_certs[0]: Modifications complete after 0s [id=orc8r/nms-certs] | |
module.orc8r-app.kubernetes_secret.orc8r_certs: Modifications complete after 0s [id=orc8r/orc8r-certs] | |
module.orc8r.module.eks.aws_iam_policy.worker_autoscaling[0]: Destroying... [id=arn:aws:iam::007606123670:policy/eks-worker-autoscaling-orc8r2021101420361739920000000a] | |
module.orc8r.aws_db_instance.default: Modifying... [id=orc8rdb] | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]: Creation complete after 0s [id=orc8r20211014202558934600000003-20211014214451776100000002] | |
module.orc8r.module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0]: Creation complete after 0s [id=arn:aws:iam::007606123670:policy/orc8r-elb-sl-role-creation20211014214451671300000001] | |
module.orc8r.module.eks.aws_launch_configuration.workers[0]: Creating... | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0]: Creating... | |
module.orc8r.aws_elasticsearch_domain.es[0]: Modifying... [id=arn:aws:es:eu-west-2:007606123670:domain/orc8r-es] | |
module.orc8r.module.eks.kubernetes_config_map.aws_auth[0]: Modifying... [id=kube-system/aws-auth] | |
module.orc8r.module.eks.aws_iam_policy.worker_autoscaling[0]: Destruction complete after 0s | |
module.orc8r.module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0]: Creation complete after 0s [id=orc8r20211014202558934600000003-20211014214452164400000004] | |
module.orc8r.module.eks.kubernetes_config_map.aws_auth[0]: Modifications complete after 0s [id=kube-system/aws-auth] | |
module.orc8r.module.eks.null_resource.wait_for_cluster[0]: Destroying... [id=4351803845162643018] | |
module.orc8r.module.eks.null_resource.wait_for_cluster[0]: Destruction complete after 0s | |
module.orc8r.aws_sns_topic.sns_orc8r_topic: Creation complete after 1s [id=arn:aws:sns:eu-west-2:007606123670:orc8r-sns] | |
module.orc8r.module.eks.aws_launch_configuration.workers[0]: Creation complete after 2s [id=orc8r-wg-120211014214452085900000003] | |
module.orc8r.module.eks.aws_autoscaling_group.workers[0]: Modifying... [id=orc8r-wg-120211014203626650900000015] | |
module.orc8r.aws_elasticsearch_domain.es[0]: Modifications complete after 2s [id=arn:aws:es:eu-west-2:007606123670:domain/orc8r-es] | |
module.orc8r.data.aws_iam_policy_document.es-management[0]: Reading... | |
module.orc8r.data.aws_iam_policy_document.es-management[0]: Read complete after 0s [id=295713799] | |
module.orc8r.module.eks.aws_autoscaling_group.workers[0]: Modifications complete after 2s [id=orc8r-wg-120211014203626650900000015] | |
module.orc8r.module.eks.random_pet.workers[0]: Destroying... [id=fleet-wren] | |
module.orc8r.module.eks.random_pet.workers[0]: Destruction complete after 0s | |
module.orc8r.module.eks.aws_launch_configuration.workers[0] (c7c5549d): Destroying... [id=orc8r-wg-120211014203620138100000012] | |
module.orc8r.module.eks.aws_launch_configuration.workers[0]: Destruction complete after 0s | |
module.orc8r-app.helm_release.fluentd[0]: Modifying... [id=fluentd] | |
module.orc8r.aws_db_instance.default: Still modifying... [id=orc8rdb, 10s elapsed] | |
module.orc8r-app.helm_release.elasticsearch_curator[0]: Modifying... [id=elasticsearch-curator] | |
module.orc8r-app.helm_release.fluentd[0]: Still modifying... [id=fluentd, 10s elapsed] | |
module.orc8r.aws_db_instance.default: Still modifying... [id=orc8rdb, 20s elapsed] | |
module.orc8r-app.helm_release.elasticsearch_curator[0]: Still modifying... [id=elasticsearch-curator, 10s elapsed] | |
module.orc8r-app.helm_release.fluentd[0]: Modifications complete after 15s [id=fluentd] | |
module.orc8r-app.helm_release.elasticsearch_curator[0]: Modifications complete after 13s [id=elasticsearch-curator] | |
module.orc8r.aws_db_instance.default: Still modifying... [id=orc8rdb, 30s elapsed] | |
module.orc8r.aws_db_instance.default: Modifications complete after 32s [id=orc8rdb] | |
module.orc8r-app.data.template_file.orc8r_values: Reading... | |
module.orc8r.aws_db_event_subscription.default: Creating... | |
module.orc8r-app.data.template_file.orc8r_values: Read complete after 1s [id=3041040325926a4ee5c0653e7a9d42fe235893d80315528d4a86970570ee7b62] | |
module.orc8r-app.helm_release.orc8r: Modifying... [id=orc8r] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Modifying... [id=lte-orc8r] | |
module.orc8r.aws_db_event_subscription.default: Still creating... [10s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 10s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 10s elapsed] | |
module.orc8r.aws_db_event_subscription.default: Still creating... [20s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 20s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 20s elapsed] | |
module.orc8r.aws_db_event_subscription.default: Still creating... [30s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 30s elapsed] | |
module.orc8r.aws_db_event_subscription.default: Creation complete after 31s [id=orc8r-rds-events] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 30s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 40s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 40s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 50s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 50s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m0s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m0s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m10s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m10s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m20s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m20s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m30s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m30s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m40s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m40s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 1m50s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 1m50s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m0s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 2m0s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m10s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 2m10s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m20s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 2m20s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m30s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 2m30s elapsed] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m40s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Still modifying... [id=lte-orc8r, 2m40s elapsed] | |
module.orc8r-app.helm_release.lte-orc8r[0]: Modifications complete after 2m42s [id=lte-orc8r] | |
module.orc8r-app.helm_release.orc8r: Still modifying... [id=orc8r, 2m50s elapsed] | |
module.orc8r-app.helm_release.orc8r: Modifications complete after 2m59s [id=orc8r] | |
Apply complete! Resources: 7 added, 12 changed, 6 destroyed. | |
Outputs: | |
nameservers = tolist([ | |
"ns-1064.awsdns-05.org", | |
"ns-1541.awsdns-00.co.uk", | |
"ns-161.awsdns-20.com", | |
"ns-871.awsdns-44.net", | |
]) | |
cloudstrapper:~/magma-experimental/ens15to161/terraform #kubectl -n orc8r get deployments -o wide | |
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR | |
fluentd 2/2 2 2 64m fluentd gcr.io/google-containers/fluentd-elasticsearch:v2.4.0 app=fluentd,release=fluentd | |
nms-magmalte 1/1 1 1 64m nms-app docker-test.artifactory.magmacore.org/magmalte:1.6.1 app.kubernetes.io/component=magmalte,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=nms,release_group=orc8r | |
nms-nginx-proxy 1/1 1 1 64m nms-nginx nginx:latest app.kubernetes.io/component=nginx,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=nms,release_group=orc8r | |
orc8r-accessd 2/2 2 2 64m accessd docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=accessd,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-alertmanager 1/1 1 1 64m alertmanager docker.io/prom/alertmanager:v0.18.0 app.kubernetes.io/component=alertmanager,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
orc8r-alertmanager-configurer 1/1 1 1 64m alertmanager-configurer docker.io/facebookincubator/alertmanager-configurer:1.0.4 app.kubernetes.io/component=alertmanager-configurer,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
orc8r-analytics 2/2 2 2 64m analytics docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=analytics,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-bootstrapper 2/2 2 2 64m bootstrapper docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=bootstrapper,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-certifier 2/2 2 2 64m certifier docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=certifier,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-configurator 2/2 2 2 64m configurator docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=configurator,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-ctraced 2/2 2 2 64m ctraced docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=ctraced,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-device 2/2 2 2 64m device docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=device,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-directoryd 2/2 2 2 64m directoryd docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=directoryd,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-dispatcher 2/2 2 2 64m dispatcher docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=dispatcher,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-eventd 2/2 2 2 64m eventd docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=eventd,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-ha 2/2 2 2 64m ha docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=ha,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-lte 2/2 2 2 64m lte docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=lte,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-metricsd 2/2 2 2 64m metricsd docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=metricsd,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-nginx 2/2 2 2 64m orc8r-nginx docker-test.artifactory.magmacore.org/nginx:1.6.1 app.kubernetes.io/component=nginx-proxy | |
orc8r-obsidian 2/2 2 2 64m obsidian docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=obsidian,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-orchestrator 2/2 2 2 64m orchestrator docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=orchestrator,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-policydb 2/2 2 2 64m policydb docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=policydb,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-prometheus 1/1 1 1 64m prometheus docker.io/prom/prometheus:v2.27.1 app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
orc8r-prometheus-cache 1/1 1 1 64m prometheus-cache docker.io/facebookincubator/prometheus-edge-hub:1.1.0 app.kubernetes.io/component=prometheus-cache,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
orc8r-prometheus-configurer 1/1 1 1 64m prometheus-configurer docker.io/facebookincubator/prometheus-configurer:1.0.4 app.kubernetes.io/component=prometheus-configurer,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
orc8r-service-registry 2/2 2 2 64m service-registry docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=service_registry,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-smsd 2/2 2 2 64m smsd docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=smsd,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-state 2/2 2 2 64m state docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=state,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-streamer 2/2 2 2 64m streamer docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=streamer,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-subscriberdb 2/2 2 2 64m subscriberdb docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=subscriberdb,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-subscriberdb-cache 1/1 1 1 3m53s subscriberdb-cache docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=subscriberdb-cache,app.kubernetes.io/instance=lte-orc8r,app.kubernetes.io/name=lte-orc8r | |
orc8r-tenants 2/2 2 2 64m tenants docker-test.artifactory.magmacore.org/controller:1.6.1 app.kubernetes.io/component=tenants,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=orc8r | |
orc8r-user-grafana 1/1 1 1 64m user-grafana docker.io/grafana/grafana:6.6.2 app.kubernetes.io/component=user-grafana,app.kubernetes.io/instance=orc8r,app.kubernetes.io/name=metrics | |
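After an upgrade like this, a quick sanity check is to confirm that every deployment's READY column matches its desired replica count. A minimal sketch of such a filter (the `check_ready` helper name is illustrative, not part of the captured session; in practice you would pipe live `kubectl -n orc8r get deployments --no-headers` output into it instead of the sample rows shown):

```shell
# Print the names of deployments whose READY column (e.g. "1/2") shows
# fewer ready replicas than desired; empty output means all are healthy.
check_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 }'
}

# Example against two sample rows (the "1/2" row is a hypothetical failure):
printf 'orc8r-state 2/2 2 2 64m\norc8r-streamer 1/2 2 2 64m\n' | check_ready
# prints: orc8r-streamer
```

In the capture above all deployments report 2/2 or 1/1, so the filter would print nothing.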