@0x4d6165
Created January 28, 2024 15:47
I0128 07:45:23.078370 54184 bootstrapchannelbuilder.go:149] hash 23027652fd48f64284b729cd30759dd3d5d899dd7caa6bb6ef3529a846e4cd4e
I0128 07:45:23.223563 54184 model.go:142] ignoring non-field:
I0128 07:45:23.223602 54184 model.go:142] ignoring non-field:
I0128 07:45:23.223618 54184 model.go:148] not writing field with no config tag: master
I0128 07:45:23.223632 54184 model.go:142] ignoring non-field: master
I0128 07:45:23.223646 54184 model.go:148] not writing field with no config tag: logFormat
I0128 07:45:23.223659 54184 model.go:142] ignoring non-field: logFormat
I0128 07:45:23.223674 54184 model.go:148] not writing field with no config tag: logLevel
I0128 07:45:23.223687 54184 model.go:142] ignoring non-field: logLevel
I0128 07:45:23.223701 54184 model.go:148] not writing field with no config tag: image
I0128 07:45:23.223714 54184 model.go:142] ignoring non-field: image
I0128 07:45:23.223727 54184 model.go:148] not writing field with no config tag: leaderElection
I0128 07:45:23.223740 54184 model.go:142] ignoring non-field: leaderElection
I0128 07:45:23.223767 54184 model.go:142] ignoring non-field: leaderElection
I0128 07:45:23.223780 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElect
I0128 07:45:23.223793 54184 model.go:142] ignoring non-field: leaderElection.leaderElect
I0128 07:45:23.223807 54184 model.go:142] ignoring non-field: leaderElection.leaderElect
I0128 07:45:23.223821 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectLeaseDuration
I0128 07:45:23.223834 54184 model.go:142] ignoring non-field: leaderElection.leaderElectLeaseDuration
I0128 07:45:23.223848 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectRenewDeadlineDuration
I0128 07:45:23.223862 54184 model.go:142] ignoring non-field: leaderElection.leaderElectRenewDeadlineDuration
I0128 07:45:23.223876 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectResourceLock
I0128 07:45:23.223889 54184 model.go:142] ignoring non-field: leaderElection.leaderElectResourceLock
I0128 07:45:23.223902 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectResourceName
I0128 07:45:23.223915 54184 model.go:142] ignoring non-field: leaderElection.leaderElectResourceName
I0128 07:45:23.223928 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectResourceNamespace
I0128 07:45:23.223941 54184 model.go:142] ignoring non-field: leaderElection.leaderElectResourceNamespace
I0128 07:45:23.223955 54184 model.go:148] not writing field with no config tag: leaderElection.leaderElectRetryPeriod
I0128 07:45:23.223967 54184 model.go:142] ignoring non-field: leaderElection.leaderElectRetryPeriod
I0128 07:45:23.223981 54184 model.go:148] not writing field with no config tag: usePolicyConfigMap
I0128 07:45:23.223994 54184 model.go:142] ignoring non-field: usePolicyConfigMap
I0128 07:45:23.224007 54184 model.go:148] not writing field with no config tag: featureGates
I0128 07:45:23.224020 54184 model.go:142] ignoring non-field: featureGates
I0128 07:45:23.224034 54184 model.go:148] not writing field with no config tag: maxPersistentVolumes
I0128 07:45:23.224047 54184 model.go:142] ignoring non-field: maxPersistentVolumes
I0128 07:45:23.224060 54184 model.go:142] ignoring non-field: qps
I0128 07:45:23.224076 54184 model.go:148] not writing field with no config tag: authenticationKubeconfig
I0128 07:45:23.224089 54184 model.go:142] ignoring non-field: authenticationKubeconfig
I0128 07:45:23.224101 54184 model.go:148] not writing field with no config tag: authorizationKubeconfig
I0128 07:45:23.224114 54184 model.go:142] ignoring non-field: authorizationKubeconfig
I0128 07:45:23.224127 54184 model.go:148] not writing field with no config tag: authorizationAlwaysAllowPaths
I0128 07:45:23.224140 54184 model.go:142] ignoring non-field: authorizationAlwaysAllowPaths
I0128 07:45:23.224153 54184 model.go:148] not writing field with no config tag: enableProfiling
I0128 07:45:23.224166 54184 model.go:142] ignoring non-field: enableProfiling
I0128 07:45:23.224179 54184 model.go:148] not writing field with no config tag: tlsCertFile
I0128 07:45:23.224192 54184 model.go:142] ignoring non-field: tlsCertFile
I0128 07:45:23.224205 54184 model.go:148] not writing field with no config tag: tlsPrivateKeyFile
I0128 07:45:23.224218 54184 model.go:142] ignoring non-field: tlsPrivateKeyFile
I0128 07:45:23.879355 54184 build_flags.go:49] ignoring non-field:
I0128 07:45:23.879390 54184 build_flags.go:49] ignoring non-field:
I0128 07:45:24.558561 54184 build_flags.go:49] ignoring non-field:
I0128 07:45:24.558595 54184 build_flags.go:49] ignoring non-field:
I0128 07:45:24.568590 54184 task.go:150] EnsureTask ignoring identical
W0128 07:45:24.575363 54184 external_access.go:39] TODO: Harmonize gcemodel ExternalAccessModelBuilder with awsmodel
W0128 07:45:24.575429 54184 firewall.go:41] TODO: Harmonize gcemodel with awsmodel for firewall - GCE model is way too open
W0128 07:45:24.575533 54184 storageacl.go:165] adding bucket level write IAM for role "ControlPlane" to gs://wanderingwires-clusters to support etcd backup
W0128 07:45:24.575599 54184 autoscalinggroup.go:151] enabling storage-rw for etcd backups
I0128 07:45:24.576659 54184 topological_sort.go:79] Dependencies:
I0128 07:45:24.576683 54184 topological_sort.go:81] Keypair/etcd-clients-ca: []
I0128 07:45:24.576697 54184 topological_sort.go:81] ManagedFile/cluster-completed.spec: []
I0128 07:45:24.576709 54184 topological_sort.go:81] TargetPool/api-wanderingwires-k8s-local: [HTTPHealthcheck/api-wanderingwires-k8s-local]
I0128 07:45:24.576730 54184 topological_sort.go:81] Subnet/us-west2-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.576743 54184 topological_sort.go:81] InstanceTemplate/nodes-us-west2-a-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local Subnet/us-west2-wanderingwires-k8s-local ServiceAccount/node BootstrapScript/nodes-us-west2-a]
I0128 07:45:24.576792 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-gcp-pd-csi-driver.addons.k8s.io-k8s-1.23: []
I0128 07:45:24.576808 54184 topological_sort.go:81] InstanceTemplate/control-plane-us-west2-a-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local Subnet/us-west2-wanderingwires-k8s-local ServiceAccount/control-plane BootstrapScript/control-plane-us-west2-a]
I0128 07:45:24.576829 54184 topological_sort.go:81] Secret/admin: []
I0128 07:45:24.576841 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.12: []
I0128 07:45:24.576853 54184 topological_sort.go:81] FirewallRule/ssh-external-to-node-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.576867 54184 topological_sort.go:81] BootstrapScript/nodes-us-west2-a: [Address/api-wanderingwires-k8s-local Keypair/etcd-peers-ca-main Keypair/etcd-manager-ca-events Keypair/etcd-peers-ca-events Keypair/kubernetes-ca Keypair/etcd-clients-ca Keypair/etcd-manager-ca-main]
I0128 07:45:24.576887 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-kubelet-api.rbac.addons.k8s.io-k8s-1.9: []
I0128 07:45:24.576900 54184 topological_sort.go:81] Keypair/etcd-peers-ca-main: []
I0128 07:45:24.576913 54184 topological_sort.go:81] StorageBucketIAM/objectadmin-wanderingwires-clusters-serviceaccount-controlplane: []
I0128 07:45:24.576926 54184 topological_sort.go:81] FirewallRule/node-to-master-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.576940 54184 topological_sort.go:81] ProjectIAMBinding/serviceaccount-nodes: []
I0128 07:45:24.576953 54184 topological_sort.go:81] Keypair/etcd-peers-ca-events: []
I0128 07:45:24.576966 54184 topological_sort.go:81] ForwardingRule/api-wanderingwires-k8s-local: [TargetPool/api-wanderingwires-k8s-local Address/api-wanderingwires-k8s-local]
I0128 07:45:24.576980 54184 topological_sort.go:81] Network/wanderingwires-k8s-local: []
I0128 07:45:24.576990 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-networking.cilium.io-k8s-1.16: []
I0128 07:45:24.577003 54184 topological_sort.go:81] Secret/kube: []
I0128 07:45:24.577015 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-storage-gce.addons.k8s.io-v1.7.0: []
I0128 07:45:24.577029 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-kops-controller.addons.k8s.io-k8s-1.16: []
I0128 07:45:24.577042 54184 topological_sort.go:81] Secret/system:scheduler: []
I0128 07:45:24.577053 54184 topological_sort.go:81] Secret/system:controller_manager: []
I0128 07:45:24.577065 54184 topological_sort.go:81] HTTPHealthcheck/api-wanderingwires-k8s-local: []
I0128 07:45:24.577078 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-limit-range.addons.k8s.io: []
I0128 07:45:24.577090 54184 topological_sort.go:81] Secret/system:dns: []
I0128 07:45:24.577102 54184 topological_sort.go:81] InstanceGroupManager/a-control-plane-us-west2-a-wanderingwires-k8s-local: [InstanceTemplate/control-plane-us-west2-a-wanderingwires-k8s-local TargetPool/api-wanderingwires-k8s-local]
I0128 07:45:24.577118 54184 topological_sort.go:81] Secret/system:logging: []
I0128 07:45:24.577130 54184 topological_sort.go:81] FirewallRule/ssh-external-to-master-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577144 54184 topological_sort.go:81] Keypair/service-account: []
I0128 07:45:24.577155 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-coredns.addons.k8s.io-k8s-1.12: []
I0128 07:45:24.577167 54184 topological_sort.go:81] ProjectIAMBinding/serviceaccount-control-plane: []
I0128 07:45:24.577180 54184 topological_sort.go:81] InstanceGroupManager/a-nodes-us-west2-a-wanderingwires-k8s-local: [InstanceTemplate/nodes-us-west2-a-wanderingwires-k8s-local]
I0128 07:45:24.577194 54184 topological_sort.go:81] ManagedFile/manifests-static-kube-apiserver-healthcheck: []
I0128 07:45:24.577206 54184 topological_sort.go:81] ManagedFile/nodeupconfig-nodes-us-west2-a: [BootstrapScript/nodes-us-west2-a]
I0128 07:45:24.577220 54184 topological_sort.go:81] ManagedFile/nodeupconfig-control-plane-us-west2-a: [BootstrapScript/control-plane-us-west2-a]
I0128 07:45:24.577233 54184 topological_sort.go:81] Disk/a-etcd-main-wanderingwires-k8s-local: []
I0128 07:45:24.577245 54184 topological_sort.go:81] FirewallRule/node-to-node-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577259 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-metadata-proxy.addons.k8s.io-v0.1.12: []
I0128 07:45:24.577272 54184 topological_sort.go:81] FirewallRule/lb-health-checks-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577285 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-bootstrap: []
I0128 07:45:24.577297 54184 topological_sort.go:81] FirewallRule/nodeport-external-to-node-ipv6-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577310 54184 topological_sort.go:81] Secret/kube-proxy: []
I0128 07:45:24.577321 54184 topological_sort.go:81] ManagedFile/etcd-cluster-spec-events: []
I0128 07:45:24.577336 54184 topological_sort.go:81] ManagedFile/manifests-etcdmanager-events-control-plane-us-west2-a: []
I0128 07:45:24.577348 54184 topological_sort.go:81] FirewallRule/nodeport-external-to-node-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577361 54184 topological_sort.go:81] Secret/kubelet: []
I0128 07:45:24.577374 54184 topological_sort.go:81] ManagedFile/kops-version.txt: []
I0128 07:45:24.577385 54184 topological_sort.go:81] FirewallRule/master-to-master-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577399 54184 topological_sort.go:81] ManagedFile/wanderingwires.k8s.local-addons-gcp-cloud-controller.addons.k8s.io-k8s-1.23: []
I0128 07:45:24.577412 54184 topological_sort.go:81] Secret/system:monitoring: []
I0128 07:45:24.577424 54184 topological_sort.go:81] MirrorKeystore/mirror-keystore: [Secret/system:scheduler Secret/system:monitoring Secret/system:controller_manager Secret/kube Secret/kube-proxy Secret/system:logging Secret/system:dns Secret/kubelet Secret/admin]
I0128 07:45:24.577441 54184 topological_sort.go:81] BootstrapScript/control-plane-us-west2-a: [Address/api-wanderingwires-k8s-local Keypair/apiserver-aggregator-ca Keypair/service-account Keypair/kubernetes-ca Keypair/etcd-clients-ca Keypair/etcd-manager-ca-main Keypair/etcd-peers-ca-main Keypair/etcd-manager-ca-events Keypair/etcd-peers-ca-events]
I0128 07:45:24.577458 54184 topological_sort.go:81] ServiceAccount/node: []
I0128 07:45:24.577470 54184 topological_sort.go:81] Disk/a-etcd-events-wanderingwires-k8s-local: []
I0128 07:45:24.577484 54184 topological_sort.go:81] FirewallRule/ssh-external-to-master-ipv6-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577497 54184 topological_sort.go:81] FirewallRule/ssh-external-to-node-ipv6-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577509 54184 topological_sort.go:81] Keypair/etcd-manager-ca-events: []
I0128 07:45:24.577521 54184 topological_sort.go:81] MirrorSecrets/mirror-secrets: [Secret/kubelet Secret/admin Secret/system:scheduler Secret/system:monitoring Secret/system:controller_manager Secret/kube Secret/kube-proxy Secret/system:logging Secret/system:dns]
I0128 07:45:24.577538 54184 topological_sort.go:81] ServiceAccount/control-plane: []
I0128 07:45:24.577552 54184 topological_sort.go:81] ManagedFile/manifests-etcdmanager-main-control-plane-us-west2-a: []
I0128 07:45:24.577565 54184 topological_sort.go:81] Keypair/etcd-manager-ca-main: []
I0128 07:45:24.577577 54184 topological_sort.go:81] PoolHealthCheck/api-wanderingwires-k8s-local: [TargetPool/api-wanderingwires-k8s-local HTTPHealthcheck/api-wanderingwires-k8s-local]
I0128 07:45:24.577591 54184 topological_sort.go:81] FirewallRule/https-api-ipv6-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577604 54184 topological_sort.go:81] Address/api-wanderingwires-k8s-local: []
I0128 07:45:24.577617 54184 topological_sort.go:81] Keypair/kubernetes-ca: []
I0128 07:45:24.577630 54184 topological_sort.go:81] ManagedFile/etcd-cluster-spec-main: []
I0128 07:45:24.577642 54184 topological_sort.go:81] Keypair/apiserver-aggregator-ca: []
I0128 07:45:24.577654 54184 topological_sort.go:81] FirewallRule/master-to-node-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577668 54184 topological_sort.go:81] FirewallRule/https-api-wanderingwires-k8s-local: [Network/wanderingwires-k8s-local]
I0128 07:45:24.577916 54184 executor.go:111] Tasks: 0 done / 72 total; 45 can run
I0128 07:45:24.578138 54184 executor.go:192] Executing task "ProjectIAMBinding/serviceaccount-control-plane": *gcetasks.ProjectIAMBinding {"Name":"serviceaccount-control-plane","Lifecycle":"Sync","Project":"wanderingwires","Member":"serviceAccount:control-plane-wandering-ohoh4e@wanderingwires.iam.gserviceaccount.com","Role":"roles/container.serviceAgent"}
I0128 07:45:24.578181 54184 executor.go:192] Executing task "HTTPHealthcheck/api-wanderingwires-k8s-local": *gcetasks.HTTPHealthcheck {"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","SelfLink":"","Port":3990,"RequestPath":"/healthz"}
I0128 07:45:24.578251 54184 executor.go:192] Executing task "Disk/a-etcd-main-wanderingwires-k8s-local": *gcetasks.Disk {"Name":"a-etcd-main-wanderingwires-k8s-local","Lifecycle":"Sync","VolumeType":"pd-ssd","SizeGB":20,"Zone":"us-west2-a","Labels":{"k8s-io-cluster-name":"wanderingwires-k8s-local","k8s-io-etcd-main":"a-2fa","k8s-io-role-master":"master"}}
I0128 07:45:24.578502 54184 projectiambinding.go:56] Checking IAM for project "wanderingwires"
I0128 07:45:24.578549 54184 executor.go:192] Executing task "Disk/a-etcd-events-wanderingwires-k8s-local": *gcetasks.Disk {"Name":"a-etcd-events-wanderingwires-k8s-local","Lifecycle":"Sync","VolumeType":"pd-ssd","SizeGB":20,"Zone":"us-west2-a","Labels":{"k8s-io-cluster-name":"wanderingwires-k8s-local","k8s-io-etcd-events":"a-2fa","k8s-io-role-master":"master"}}
I0128 07:45:24.578451 54184 executor.go:192] Executing task "Address/api-wanderingwires-k8s-local": *gcetasks.Address {"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","IPAddress":null,"IPAddressType":null,"Purpose":null,"ForAPIServer":true,"Subnetwork":null}
I0128 07:45:24.578736 54184 executor.go:192] Executing task "Network/wanderingwires-k8s-local": *gcetasks.Network {"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false}
I0128 07:45:24.578395 54184 executor.go:192] Executing task "ServiceAccount/node": *gcetasks.ServiceAccount {"Name":"node","Lifecycle":"Sync","Email":"node-wanderingwires-k8s-local@wanderingwires.iam.gserviceaccount.com","Description":"kubernetes worker nodes","DisplayName":"node","Shared":null}
I0128 07:45:24.578892 54184 executor.go:192] Executing task "StorageBucketIAM/objectadmin-wanderingwires-clusters-serviceaccount-controlplane": *gcetasks.StorageBucketIAM {"Name":"objectadmin-wanderingwires-clusters-serviceaccount-controlplane","Lifecycle":"WarnIfInsufficientAccess","Bucket":"wanderingwires-clusters","Member":"serviceAccount:control-plane-wandering-ohoh4e@wanderingwires.iam.gserviceaccount.com","Role":"roles/storage.objectAdmin"}
I0128 07:45:24.578141 54184 executor.go:192] Executing task "ServiceAccount/control-plane": *gcetasks.ServiceAccount {"Name":"control-plane","Lifecycle":"Sync","Email":"control-plane-wandering-ohoh4e@wanderingwires.iam.gserviceaccount.com","Description":"kubernetes control-plane instances","DisplayName":"control-plane","Shared":null}
I0128 07:45:24.579024 54184 storagebucketiam.go:56] Checking GCS bucket IAM for gs://wanderingwires-clusters for serviceAccount:control-plane-wandering-ohoh4e@wanderingwires.iam.gserviceaccount.com
I0128 07:45:24.579233 54184 executor.go:192] Executing task "ProjectIAMBinding/serviceaccount-nodes": *gcetasks.ProjectIAMBinding {"Name":"serviceaccount-nodes","Lifecycle":"Sync","Project":"wanderingwires","Member":"serviceAccount:node-wanderingwires-k8s-local@wanderingwires.iam.gserviceaccount.com","Role":"roles/compute.viewer"}
I0128 07:45:24.579291 54184 projectiambinding.go:56] Checking IAM for project "wanderingwires"
I0128 07:45:24.578015 54184 executor.go:192] Executing task "Secret/system:dns": *fitasks.Secret {"Name":"system:dns","Lifecycle":"Sync"}
I0128 07:45:24.578508 54184 executor.go:192] Executing task "Secret/kubelet": *fitasks.Secret {"Name":"kubelet","Lifecycle":"Sync"}
I0128 07:45:24.579272 54184 executor.go:192] Executing task "Keypair/apiserver-aggregator-ca": *fitasks.Keypair {"Name":"apiserver-aggregator-ca","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=apiserver-aggregator-ca","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.582004 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/system:dns"
I0128 07:45:24.582011 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/kubelet"
I0128 07:45:24.578048 54184 executor.go:192] Executing task "ManagedFile/manifests-etcdmanager-events-control-plane-us-west2-a": *fitasks.ManagedFile {"Name":"manifests-etcdmanager-events-control-plane-us-west2-a","Lifecycle":"Sync","Base":null,"Location":"manifests/etcd/events-control-plane-us-west2-a.yaml","Contents":"apiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: null\n labels:\n k8s-app: etcd-manager-events\n name: etcd-manager-events\n namespace: kube-system\nspec:\n containers:\n - command:\n - /bin/sh\n - -c\n - mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \u003c /tmp/pipe \u0026 ) ; exec /etcd-manager\n --backup-store=gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/events\n --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true\n --dns-suffix=.internal.wanderingwires.k8s.local --grpc-port=3997 --peer-urls=https://__name__:2381\n --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s-io-etcd-events\n --volume-provider=gce --volume-tag=k8s-io-cluster-name=wanderingwires-k8s-local\n --volume-tag=k8s-io-etcd-events --volume-tag=k8s-io-role-master=master \u003e /tmp/pipe\n 2\u003e\u00261\n env:\n - name: S3_ACCESS_KEY_ID\n value: 00461048e0214e00000000003\n - name: S3_ENDPOINT\n value: s3.us-west-004.backblazeb2.com\n - name: S3_REGION\n value: us-west-004\n - name: S3_SECRET_ACCESS_KEY\n value: K004PyzP5+Z6baV4+H2aG3npxAQlT60\n - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION\n value: 90d\n image: registry.k8s.io/etcdadm/etcd-manager-slim:v3.0.20230630@sha256:45d67190fc70de2affef3c9efc0810c288a5f40b1a9717134a299dff9dc3f122\n name: etcd-manager\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n securityContext:\n privileged: true\n volumeMounts:\n - mountPath: /rootfs\n name: rootfs\n - mountPath: /run\n name: run\n - mountPath: /etc/kubernetes/pki/etcd-manager\n name: pki\n - mountPath: /opt\n name: opt\n - mountPath: /var/log/etcd.log\n name: varlogetcd\n hostNetwork: true\n hostPID: 
true\n initContainers:\n - args:\n - --target-dir=/opt/kops-utils/\n - --src=/ko-app/kops-utils-cp\n command:\n - /ko-app/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: kops-utils-cp\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --target-dir=/opt/etcd-v3.4.13\n - --src=/usr/local/bin/etcd\n - --src=/usr/local/bin/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/etcd:3.4.13-0@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2\n name: init-etcd-3-4-13\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --target-dir=/opt/etcd-v3.5.9\n - --src=/usr/local/bin/etcd\n - --src=/usr/local/bin/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/etcd:3.5.9-0@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3\n name: init-etcd-3-5-9\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --symlink\n - --target-dir=/opt/etcd-v3.4.3\n - --src=/opt/etcd-v3.4.13/etcd\n - --src=/opt/etcd-v3.4.13/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: init-etcd-symlinks-3-4-13\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --symlink\n - --target-dir=/opt/etcd-v3.5.0\n - --target-dir=/opt/etcd-v3.5.1\n - --target-dir=/opt/etcd-v3.5.3\n - --target-dir=/opt/etcd-v3.5.4\n - --target-dir=/opt/etcd-v3.5.6\n - --target-dir=/opt/etcd-v3.5.7\n - --src=/opt/etcd-v3.5.9/etcd\n - --src=/opt/etcd-v3.5.9/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: init-etcd-symlinks-3-5-9\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: 
opt\n priorityClassName: system-cluster-critical\n tolerations:\n - key: CriticalAddonsOnly\n operator: Exists\n volumes:\n - hostPath:\n path: /\n type: Directory\n name: rootfs\n - hostPath:\n path: /run\n type: DirectoryOrCreate\n name: run\n - hostPath:\n path: /etc/kubernetes/pki/etcd-manager-events\n type: DirectoryOrCreate\n name: pki\n - emptyDir: {}\n name: opt\n - hostPath:\n path: /var/log/etcd-events.log\n type: FileOrCreate\n name: varlogetcd\nstatus: {}\n","PublicACL":null}
I0128 07:45:24.578376 54184 executor.go:192] Executing task "Keypair/kubernetes-ca": *fitasks.Keypair {"Name":"kubernetes-ca","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=kubernetes-ca","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578674 54184 executor.go:192] Executing task "Secret/system:monitoring": *fitasks.Secret {"Name":"system:monitoring","Lifecycle":"Sync"}
I0128 07:45:24.578427 54184 executor.go:192] Executing task "Keypair/etcd-clients-ca": *fitasks.Keypair {"Name":"etcd-clients-ca","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=etcd-clients-ca","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578872 54184 executor.go:192] Executing task "Secret/system:logging": *fitasks.Secret {"Name":"system:logging","Lifecycle":"Sync"}
I0128 07:45:24.578581 54184 executor.go:192] Executing task "Secret/system:scheduler": *fitasks.Secret {"Name":"system:scheduler","Lifecycle":"Sync"}
I0128 07:45:24.582153 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/system:logging"
I0128 07:45:24.582172 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/system:scheduler"
I0128 07:45:24.579205 54184 executor.go:192] Executing task "Secret/system:controller_manager": *fitasks.Secret {"Name":"system:controller_manager","Lifecycle":"Sync"}
I0128 07:45:24.578089 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.12": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.12","Lifecycle":"Sync","Base":null,"Location":"addons/dns-controller.addons.k8s.io/k8s-1.12.yaml","Contents":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: dns-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: dns-controller.addons.k8s.io\n k8s-app: dns-controller\n version: v1.28.3\n name: dns-controller\n namespace: kube-system\nspec:\n replicas: 1\n selector:\n matchLabels:\n k8s-app: dns-controller\n strategy:\n type: Recreate\n template:\n metadata:\n creationTimestamp: null\n labels:\n k8s-addon: dns-controller.addons.k8s.io\n k8s-app: dns-controller\n kops.k8s.io/managed-by: kops\n version: v1.28.3\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n containers:\n - args:\n - --watch-ingress=false\n - --dns=gossip\n - --gossip-seed=127.0.0.1:3999\n - --gossip-protocol-secondary=memberlist\n - --gossip-listen-secondary=0.0.0.0:3993\n - --gossip-seed-secondary=127.0.0.1:4000\n - --internal-ipv4\n - --zone=*/*\n - -v=2\n command: null\n env:\n - name: KUBERNETES_SERVICE_HOST\n value: 127.0.0.1\n image: registry.k8s.io/kops/dns-controller:1.28.3@sha256:44e0c6c8cb0f4ce819b0a052e20206310a88dfbc2fcec71cfde51a717b69f114\n name: dns-controller\n resources:\n requests:\n cpu: 50m\n memory: 50Mi\n securityContext:\n runAsNonRoot: true\n dnsPolicy: Default\n hostNetwork: true\n nodeSelector: null\n priorityClassName: system-cluster-critical\n serviceAccount: dns-controller\n tolerations:\n - key: 
node.cloudprovider.kubernetes.io/uninitialized\n operator: Exists\n - key: node.kubernetes.io/not-ready\n operator: Exists\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - key: node-role.kubernetes.io/master\n operator: Exists\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: dns-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: dns-controller.addons.k8s.io\n name: dns-controller\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: dns-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: dns-controller.addons.k8s.io\n name: kops:dns-controller\nrules:\n- apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - pods\n - ingress\n - nodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - networking.k8s.io\n resources:\n - ingresses\n verbs:\n - get\n - list\n - watch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: dns-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: dns-controller.addons.k8s.io\n name: kops:dns-controller\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kops:dns-controller\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:serviceaccount:kube-system:dns-controller","PublicACL":null}
I0128 07:45:24.582263 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/system:controller_manager"
I0128 07:45:24.578694 54184 executor.go:192] Executing task "Keypair/etcd-peers-ca-events": *fitasks.Keypair {"Name":"etcd-peers-ca-events","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=etcd-peers-ca-events","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578464 54184 executor.go:192] Executing task "Keypair/etcd-manager-ca-main": *fitasks.Keypair {"Name":"etcd-manager-ca-main","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=etcd-manager-ca-main","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578449 54184 executor.go:192] Executing task "Keypair/etcd-manager-ca-events": *fitasks.Keypair {"Name":"etcd-manager-ca-events","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=etcd-manager-ca-events","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578303 54184 executor.go:192] Executing task "Secret/admin": *fitasks.Secret {"Name":"admin","Lifecycle":"Sync"}
I0128 07:45:24.578715 54184 executor.go:192] Executing task "Keypair/service-account": *fitasks.Keypair {"Name":"service-account","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=service-account","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.582136 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/system:monitoring"
I0128 07:45:24.582395 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/admin"
I0128 07:45:24.578335 54184 executor.go:192] Executing task "Secret/kube": *fitasks.Secret {"Name":"kube","Lifecycle":"Sync"}
I0128 07:45:24.578324 54184 executor.go:192] Executing task "ManagedFile/manifests-etcdmanager-main-control-plane-us-west2-a": *fitasks.ManagedFile {"Name":"manifests-etcdmanager-main-control-plane-us-west2-a","Lifecycle":"Sync","Base":null,"Location":"manifests/etcd/main-control-plane-us-west2-a.yaml","Contents":"apiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: null\n labels:\n k8s-app: etcd-manager-main\n name: etcd-manager-main\n namespace: kube-system\nspec:\n containers:\n - command:\n - /bin/sh\n - -c\n - mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \u003c /tmp/pipe \u0026 ) ; exec /etcd-manager\n --backup-store=gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/main\n --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true\n --dns-suffix=.internal.wanderingwires.k8s.local --grpc-port=3996 --peer-urls=https://__name__:2380\n --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s-io-etcd-main\n --volume-provider=gce --volume-tag=k8s-io-cluster-name=wanderingwires-k8s-local\n --volume-tag=k8s-io-etcd-main --volume-tag=k8s-io-role-master=master \u003e /tmp/pipe\n 2\u003e\u00261\n env:\n - name: S3_ACCESS_KEY_ID\n value: 00461048e0214e00000000003\n - name: S3_ENDPOINT\n value: s3.us-west-004.backblazeb2.com\n - name: S3_REGION\n value: us-west-004\n - name: S3_SECRET_ACCESS_KEY\n value: K004PyzP5+Z6baV4+H2aG3npxAQlT60\n - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION\n value: 90d\n image: registry.k8s.io/etcdadm/etcd-manager-slim:v3.0.20230630@sha256:45d67190fc70de2affef3c9efc0810c288a5f40b1a9717134a299dff9dc3f122\n name: etcd-manager\n resources:\n requests:\n cpu: 200m\n memory: 100Mi\n securityContext:\n privileged: true\n volumeMounts:\n - mountPath: /rootfs\n name: rootfs\n - mountPath: /run\n name: run\n - mountPath: /etc/kubernetes/pki/etcd-manager\n name: pki\n - mountPath: /opt\n name: opt\n - mountPath: /var/log/etcd.log\n name: varlogetcd\n hostNetwork: true\n hostPID: true\n initContainers:\n - args:\n - --target-dir=/opt/kops-utils/\n - --src=/ko-app/kops-utils-cp\n command:\n - /ko-app/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: kops-utils-cp\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --target-dir=/opt/etcd-v3.4.13\n - --src=/usr/local/bin/etcd\n - --src=/usr/local/bin/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/etcd:3.4.13-0@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2\n name: init-etcd-3-4-13\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --target-dir=/opt/etcd-v3.5.9\n - --src=/usr/local/bin/etcd\n - --src=/usr/local/bin/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/etcd:3.5.9-0@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3\n name: init-etcd-3-5-9\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --symlink\n - --target-dir=/opt/etcd-v3.4.3\n - --src=/opt/etcd-v3.4.13/etcd\n - --src=/opt/etcd-v3.4.13/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: init-etcd-symlinks-3-4-13\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n - args:\n - --symlink\n - --target-dir=/opt/etcd-v3.5.0\n - --target-dir=/opt/etcd-v3.5.1\n - --target-dir=/opt/etcd-v3.5.3\n - --target-dir=/opt/etcd-v3.5.4\n - --target-dir=/opt/etcd-v3.5.6\n - --target-dir=/opt/etcd-v3.5.7\n - --src=/opt/etcd-v3.5.9/etcd\n - --src=/opt/etcd-v3.5.9/etcdctl\n command:\n - /opt/kops-utils/kops-utils-cp\n image: registry.k8s.io/kops/kops-utils-cp:1.28.3@sha256:56708df2b9edab97afa6cd4ed1879b55c26202872471a8bf59378ad62ea89751\n name: init-etcd-symlinks-3-5-9\n resources: {}\n volumeMounts:\n - mountPath: /opt\n name: opt\n priorityClassName: system-cluster-critical\n tolerations:\n - key: CriticalAddonsOnly\n operator: Exists\n volumes:\n - hostPath:\n path: /\n type: Directory\n name: rootfs\n - hostPath:\n path: /run\n type: DirectoryOrCreate\n name: run\n - hostPath:\n path: /etc/kubernetes/pki/etcd-manager-main\n type: DirectoryOrCreate\n name: pki\n - emptyDir: {}\n name: opt\n - hostPath:\n path: /var/log/etcd.log\n type: FileOrCreate\n name: varlogetcd\nstatus: {}\n","PublicACL":null}
I0128 07:45:24.584763 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/etcd-manager-ca-events/keyset.yaml"
I0128 07:45:24.578654 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-kops-controller.addons.k8s.io-k8s-1.16": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-kops-controller.addons.k8s.io-k8s-1.16","Lifecycle":"Sync","Base":null,"Location":"addons/kops-controller.addons.k8s.io/k8s-1.16.yaml","Contents":"apiVersion: v1\ndata:\n config.yaml: |\n {\"clusterName\":\"wanderingwires.k8s.local\",\"cloud\":\"gce\",\"configBase\":\"gs://wanderingwires-clusters/wanderingwires.k8s.local\",\"secretStore\":\"gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets\",\"server\":{\"Listen\":\":3988\",\"provider\":{\"gce\":{\"projectID\":\"wanderingwires\",\"region\":\"us-west2\",\"clusterName\":\"wanderingwires.k8s.local\",\"MaxTimeSkew\":300}},\"serverKeyPath\":\"/etc/kubernetes/kops-controller/pki/kops-controller.key\",\"serverCertificatePath\":\"/etc/kubernetes/kops-controller/pki/kops-controller.crt\",\"caBasePath\":\"/etc/kubernetes/kops-controller/pki\",\"signingCAs\":[\"kubernetes-ca\"],\"certNames\":[\"kubelet\",\"kubelet-server\"]},\"discovery\":{\"enabled\":true}}\nkind: ConfigMap\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\n namespace: kube-system\n\n---\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n k8s-app: kops-controller\n version: v1.28.3\n name: kops-controller\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: kops-controller\n template:\n metadata:\n annotations:\n dns.alpha.kubernetes.io/internal: kops-controller.internal.wanderingwires.k8s.local\n creationTimestamp: null\n labels:\n k8s-addon: kops-controller.addons.k8s.io\n k8s-app: kops-controller\n kops.k8s.io/managed-by: kops\n version: v1.28.3\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - key: kops.k8s.io/kops-controller-pki\n operator: Exists\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n - key: kops.k8s.io/kops-controller-pki\n operator: Exists\n containers:\n - args:\n - --v=2\n - --conf=/etc/kubernetes/kops-controller/config/config.yaml\n command: null\n env:\n - name: KUBERNETES_SERVICE_HOST\n value: 127.0.0.1\n - name: S3_ACCESS_KEY_ID\n value: 00461048e0214e00000000003\n - name: S3_ENDPOINT\n value: s3.us-west-004.backblazeb2.com\n - name: S3_REGION\n value: us-west-004\n - name: S3_SECRET_ACCESS_KEY\n value: K004PyzP5+Z6baV4+H2aG3npxAQlT60\n image: registry.k8s.io/kops/kops-controller:1.28.3@sha256:606151494e5a0a73d2edec2945c09264e3b0e85bad36b52c59b954bff1ee7e19\n name: kops-controller\n resources:\n requests:\n cpu: 50m\n memory: 50Mi\n securityContext:\n runAsNonRoot: true\n runAsUser: 10011\n volumeMounts:\n - mountPath: /etc/kubernetes/kops-controller/config/\n name: kops-controller-config\n - mountPath: /etc/kubernetes/kops-controller/pki/\n name: kops-controller-pki\n dnsPolicy: Default\n hostNetwork: true\n nodeSelector: null\n priorityClassName: system-cluster-critical\n serviceAccount: kops-controller\n tolerations:\n - key: node.cloudprovider.kubernetes.io/uninitialized\n operator: Exists\n - key: node.kubernetes.io/not-ready\n operator: Exists\n - key: node-role.kubernetes.io/master\n operator: Exists\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n volumes:\n - configMap:\n name: kops-controller\n name: kops-controller-config\n - hostPath:\n path: /etc/kubernetes/kops-controller/\n type: Directory\n name: kops-controller-pki\n updateStrategy:\n type: OnDelete\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\nrules:\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - list\n - watch\n - patch\n- apiGroups:\n - \"\"\n resources:\n - endpoints\n verbs:\n - get\n - list\n - watch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kops-controller\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:serviceaccount:kube-system:kops-controller\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\n namespace: kube-system\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - get\n - list\n - watch\n - create\n- apiGroups:\n - \"\"\n - coordination.k8s.io\n resourceNames:\n - kops-controller-leader\n resources:\n - configmaps\n - leases\n verbs:\n - get\n - list\n - watch\n - patch\n - update\n - delete\n- apiGroups:\n - \"\"\n - coordination.k8s.io\n resources:\n - configmaps\n - leases\n verbs:\n - create\n- apiGroups:\n - \"\"\n resourceNames:\n - coredns\n resources:\n - configmaps\n verbs:\n - get\n - watch\n - patch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller\n namespace: kube-system\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: Role\n name: kops-controller\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:serviceaccount:kube-system:kops-controller\n\n---\n\napiVersion: v1\nkind: Service\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n discovery.kops.k8s.io/internal-name: api\n k8s-addon: kops-controller.addons.k8s.io\n name: api-internal\n namespace: kube-system\nspec:\n clusterIP: None\n ports:\n - name: https\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n k8s-app: kops-controller\n type: ClusterIP\n\n---\n\napiVersion: v1\nkind: Service\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kops-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n discovery.kops.k8s.io/internal-name: kops-controller\n k8s-addon: kops-controller.addons.k8s.io\n name: kops-controller-internal\n namespace: kube-system\nspec:\n clusterIP: None\n ports:\n - name: https\n port: 3988\n protocol: TCP\n targetPort: 3988\n selector:\n k8s-app: kops-controller\n type: ClusterIP","PublicACL":null}
I0128 07:45:24.585042 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/kops-controller.addons.k8s.io/k8s-1.16.yaml"
I0128 07:45:24.585094 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/service-account/keyset.yaml"
I0128 07:45:24.582427 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/kube"
I0128 07:45:24.584876 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/manifests/etcd/main-control-plane-us-west2-a.yaml"
I0128 07:45:24.578390 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-gcp-pd-csi-driver.addons.k8s.io-k8s-1.23": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-gcp-pd-csi-driver.addons.k8s.io-k8s-1.23","Lifecycle":"Sync","Base":null,"Location":"addons/gcp-pd-csi-driver.addons.k8s.io/k8s-1.23.yaml","Contents":"allowVolumeExpansion: true\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n annotations:\n storageclass.kubernetes.io/is-default-class: \"true\"\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n kubernetes.io/cluster-service: \"true\"\n name: standard-csi\nparameters:\n type: pd-standard\nprovisioner: pd.csi.storage.gke.io\nvolumeBindingMode: WaitForFirstConsumer\n\n---\n\napiVersion: v1\nkind: Namespace\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: gce-pd-csi-driver\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-node-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n k8s-app: gcp-compute-persistent-disk-csi-driver\n name: csi-gce-pd-leaderelection-role\n namespace: gce-pd-csi-driver\nrules:\n- apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - get\n - watch\n - list\n - delete\n - update\n - create\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-attacher-role\nrules:\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - watch\n - update\n - patch\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - storage.k8s.io\n resources:\n - csinodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - storage.k8s.io\n resources:\n - volumeattachments\n verbs:\n - get\n - list\n - watch\n - update\n - patch\n- apiGroups:\n - storage.k8s.io\n resources:\n - volumeattachments/status\n verbs:\n - patch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-provisioner-role\nrules:\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - watch\n - create\n - delete\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumeclaims\n verbs:\n - get\n - list\n - watch\n - update\n- apiGroups:\n - storage.k8s.io\n resources:\n - storageclasses\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - list\n - watch\n - create\n - update\n - patch\n- apiGroups:\n - storage.k8s.io\n resources:\n - csinodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - snapshot.storage.k8s.io\n resources:\n - volumesnapshots\n verbs:\n - get\n - list\n- apiGroups:\n - snapshot.storage.k8s.io\n resources:\n - volumesnapshotcontents\n verbs:\n - get\n - list\n- apiGroups:\n - storage.k8s.io\n resources:\n - volumeattachments\n verbs:\n - get\n - list\n - watch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-resizer-role\nrules:\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - watch\n - update\n - patch\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumeclaims\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumeclaims/status\n verbs:\n - update\n - patch\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - list\n - watch\n - create\n - update\n - patch\n- apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - list\n - watch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-snapshotter-role\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - list\n - watch\n - create\n - update\n - patch\n- apiGroups:\n - snapshot.storage.k8s.io\n resources:\n - volumesnapshotclasses\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - snapshot.storage.k8s.io\n resources:\n - volumesnapshotcontents\n verbs:\n - create\n - get\n - list\n - watch\n - update\n - delete\n - patch\n- apiGroups:\n - snapshot.storage.k8s.io\n resources:\n - volumesnapshotcontents/status\n verbs:\n - update\n - patch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n k8s-app: gcp-compute-persistent-disk-csi-driver\n name: csi-gce-pd-controller-leaderelection-binding\n namespace: gce-pd-csi-driver\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: Role\n name: csi-gce-pd-leaderelection-role\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-node-deploy\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller-attacher-binding\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-attacher-role\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller-deploy\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-controller-deploy\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller-provisioner-binding\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-provisioner-role\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller-snapshotter-binding\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-snapshotter-role\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-node\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-node-deploy\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-node-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-resizer-binding\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: csi-gce-pd-resizer-role\nsubjects:\n- kind: ServiceAccount\n name: csi-gce-pd-controller-sa\n namespace: gce-pd-csi-driver\n\n---\n\napiVersion: scheduling.k8s.io/v1\ndescription: This priority class should be used for the GCE PD CSI driver controller\n deployment only.\nglobalDefault: false\nkind: PriorityClass\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller\nvalue: 900000000\n\n---\n\napiVersion: scheduling.k8s.io/v1\ndescription: This priority class should be used for the GCE PD CSI driver node deployment\n only.\nglobalDefault: false\nkind: PriorityClass\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-node\nvalue: 900001000\n\n---\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-controller\n namespace: gce-pd-csi-driver\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: gcp-compute-persistent-disk-csi-driver\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: gcp-compute-persistent-disk-csi-driver\n kops.k8s.io/managed-by: kops\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n containers:\n - args:\n - --v=5\n - --endpoint=unix:/csi/csi.sock\n - --extra-labels=k8s-io-cluster-name=wanderingwires-k8s-local\n env: []\n image: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.10.1@sha256:bc753f567edd1fa3b4de82e75c106629fb2af7815bb67d81a580777619246fc4\n name: gce-pd-driver\n volumeMounts:\n - mountPath: /csi\n name: socket-dir\n - args:\n - --v=5\n - --csi-address=/csi/csi.sock\n - --feature-gates=Topology=true\n - --http-endpoint=:22011\n - --leader-election-namespace=$(PDCSI_NAMESPACE)\n - --timeout=250s\n - --extra-create-metadata\n - --leader-election\n - --default-fstype=ext4\n - --controller-publish-readonly=true\n env:\n - name: PDCSI_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n image: registry.k8s.io/sig-storage/csi-provisioner:v3.4.0@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79\n livenessProbe:\n failureThreshold: 1\n httpGet:\n path: /healthz/leader-election\n port: http-endpoint\n initialDelaySeconds: 10\n periodSeconds: 20\n timeoutSeconds: 10\n name: csi-provisioner\n ports:\n - containerPort: 22011\n name: http-endpoint\n protocol: TCP\n volumeMounts:\n - mountPath: /csi\n name: socket-dir\n - args:\n - --v=5\n - --csi-address=/csi/csi.sock\n - --http-endpoint=:22012\n - --leader-election\n - --leader-election-namespace=$(PDCSI_NAMESPACE)\n - --timeout=250s\n env:\n - name: PDCSI_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n image: registry.k8s.io/sig-storage/csi-attacher:v4.2.0@sha256:34cf9b32736c6624fc9787fb149ea6e0fbeb45415707ac2f6440ac960f1116e6\n livenessProbe:\n failureThreshold: 1\n httpGet:\n path: /healthz/leader-election\n port: http-endpoint\n initialDelaySeconds: 10\n periodSeconds: 20\n timeoutSeconds: 10\n name: csi-attacher\n ports:\n - containerPort: 22012\n name: http-endpoint\n protocol: TCP\n volumeMounts:\n - mountPath: /csi\n name: socket-dir\n - args:\n - --v=5\n - --csi-address=/csi/csi.sock\n - --http-endpoint=:22013\n - --leader-election\n - --leader-election-namespace=$(PDCSI_NAMESPACE)\n - --handle-volume-inuse-error=false\n env:\n - name: PDCSI_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n image: registry.k8s.io/sig-storage/csi-resizer:v1.7.0@sha256:3a7bdf5d105783d05d0962fa06ca53032b01694556e633f27366201c2881e01d\n livenessProbe:\n failureThreshold: 1\n httpGet:\n path: /healthz/leader-election\n port: http-endpoint\n initialDelaySeconds: 10\n periodSeconds: 20\n timeoutSeconds: 10\n name: csi-resizer\n ports:\n - containerPort: 22013\n name: http-endpoint\n protocol: TCP\n volumeMounts:\n - mountPath: /csi\n name: socket-dir\n - args:\n - --v=5\n - --csi-address=/csi/csi.sock\n - --metrics-address=:22014\n - --leader-election\n - --leader-election-namespace=$(PDCSI_NAMESPACE)\n - --timeout=300s\n env:\n - name: PDCSI_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n image: registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f\n name: csi-snapshotter\n volumeMounts:\n - mountPath: /csi\n name: socket-dir\n hostNetwork: true\n nodeSelector: null\n priorityClassName: csi-gce-pd-controller\n serviceAccountName: csi-gce-pd-controller-sa\n tolerations:\n - effect: NoSchedule\n operator: Exists\n - key: CriticalAddonsOnly\n operator: Exists\n volumes:\n - emptyDir: {}\n name: socket-dir\n\n---\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: csi-gce-pd-node\n namespace: gce-pd-csi-driver\nspec:\n selector:\n matchLabels:\n app: gcp-compute-persistent-disk-csi-driver\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: gcp-compute-persistent-disk-csi-driver\n kops.k8s.io/managed-by: kops\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n containers:\n - args:\n - --v=5\n - --csi-address=/csi/csi.sock\n - --kubelet-registration-path=/var/lib/kubelet/plugins/pd.csi.storage.gke.io/csi.sock\n env:\n - name: KUBE_NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183\n name: csi-driver-registrar\n volumeMounts:\n - mountPath: /csi\n name: plugin-dir\n - mountPath: /registration\n name: registration-dir\n - args:\n - --v=5\n - --endpoint=unix:/csi/csi.sock\n - --run-controller-service=false\n image: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.10.1@sha256:bc753f567edd1fa3b4de82e75c106629fb2af7815bb67d81a580777619246fc4\n name: gce-pd-driver\n securityContext:\n privileged: true\n volumeMounts:\n - mountPath: /var/lib/kubelet\n mountPropagation: Bidirectional\n name: kubelet-dir\n - mountPath: /csi\n name: plugin-dir\n - mountPath: /dev\n name: device-dir\n - mountPath: /etc/udev\n name: udev-rules-etc\n - mountPath: /lib/udev\n name: udev-rules-lib\n - mountPath: /run/udev\n name: udev-socket\n - mountPath: /sys\n name: sys\n hostNetwork: true\n nodeSelector: null\n priorityClassName: csi-gce-pd-node\n serviceAccountName: csi-gce-pd-node-sa\n tolerations:\n - operator: Exists\n volumes:\n - hostPath:\n path: /var/lib/kubelet/plugins_registry/\n type: Directory\n name: registration-dir\n - hostPath:\n path: /var/lib/kubelet\n type: Directory\n name: kubelet-dir\n - hostPath:\n path: /var/lib/kubelet/plugins/pd.csi.storage.gke.io/\n type: DirectoryOrCreate\n name: plugin-dir\n - hostPath:\n path: /dev\n type: Directory\n name: device-dir\n - hostPath:\n path: /etc/udev\n type: Directory\n name: udev-rules-etc\n - hostPath:\n path: /lib/udev\n type: Directory\n name: udev-rules-lib\n - hostPath:\n path: /run/udev\n type: Directory\n name: udev-socket\n - hostPath:\n path: /sys\n type: Directory\n name: sys\n\n---\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-pd-csi-driver.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n name: pd.csi.storage.gke.io\nspec:\n attachRequired: true\n podInfoOnMount: false","PublicACL":null}
I0128 07:45:24.578349 54184 executor.go:192] Executing task "Keypair/etcd-peers-ca-main": *fitasks.Keypair {"Name":"etcd-peers-ca-main","alternateNames":null,"Lifecycle":"Sync","Signer":null,"subject":"cn=etcd-peers-ca-main","issuer":"","type":"ca","oldFormat":false}
I0128 07:45:24.578018 54184 executor.go:192] Executing task "Secret/kube-proxy": *fitasks.Secret {"Name":"kube-proxy","Lifecycle":"Sync"}
I0128 07:45:24.579227 54184 executor.go:192] Executing task "ManagedFile/etcd-cluster-spec-events": *fitasks.ManagedFile {"Name":"etcd-cluster-spec-events","Lifecycle":"Sync","Base":"gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/events","Location":"/control/etcd-cluster-spec","Contents":"{\n \"memberCount\": 1,\n \"etcdVersion\": \"3.5.9\"\n}","PublicACL":null}
I0128 07:45:24.578738 54184 executor.go:192] Executing task "ManagedFile/manifests-static-kube-apiserver-healthcheck": *fitasks.ManagedFile {"Name":"manifests-static-kube-apiserver-healthcheck","Lifecycle":"Sync","Base":null,"Location":"manifests/static/kube-apiserver-healthcheck.yaml","Contents":"apiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: null\nspec:\n containers:\n - args:\n - --ca-cert=/secrets/ca.crt\n - --client-cert=/secrets/client.crt\n - --client-key=/secrets/client.key\n image: registry.k8s.io/kops/kube-apiserver-healthcheck:1.28.3@sha256:f83dbddc066533685d209907e3dbedd24df93a1e0abddefa1e5c2beb6bc113cd\n livenessProbe:\n httpGet:\n host: 127.0.0.1\n path: /.kube-apiserver-healthcheck/healthz\n port: 3990\n initialDelaySeconds: 5\n timeoutSeconds: 5\n name: healthcheck\n resources: {}\n securityContext:\n runAsNonRoot: true\n runAsUser: 10012\n volumeMounts:\n - mountPath: /secrets\n name: healthcheck-secrets\n readOnly: true\n volumes:\n - hostPath:\n path: /etc/kubernetes/kube-apiserver-healthcheck/secrets\n type: Directory\n name: healthcheck-secrets\nstatus: {}\n","PublicACL":null}
I0128 07:45:24.578607 54184 executor.go:192] Executing task "ManagedFile/kops-version.txt": *fitasks.ManagedFile {"Name":"kops-version.txt","Lifecycle":"Sync","Base":"gs://wanderingwires-clusters/wanderingwires.k8s.local","Location":"kops-version.txt","Contents":"1.28.3","PublicACL":null}
I0128 07:45:24.578484 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-kubelet-api.rbac.addons.k8s.io-k8s-1.9": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-kubelet-api.rbac.addons.k8s.io-k8s-1.9","Lifecycle":"Sync","Base":null,"Location":"addons/kubelet-api.rbac.addons.k8s.io/k8s-1.9.yaml","Contents":"apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: kubelet-api.rbac.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: kubelet-api.rbac.addons.k8s.io\n name: kops:system:kubelet-api-admin\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:kubelet-api-admin\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: kubelet-api","PublicACL":null}
I0128 07:45:24.578529 54184 executor.go:192] Executing task "ManagedFile/cluster-completed.spec": *fitasks.ManagedFile {"Name":"cluster-completed.spec","Lifecycle":"Sync","Base":"gs://wanderingwires-clusters/wanderingwires.k8s.local","Location":"cluster-completed.spec","Contents":"apiVersion: kops.k8s.io/v1alpha2\nkind: Cluster\nmetadata:\n creationTimestamp: \"2024-01-28T15:32:06Z\"\n name: wanderingwires.k8s.local\nspec:\n api:\n loadBalancer:\n type: Public\n authorization:\n rbac: {}\n channel: stable\n cloudConfig:\n gcpPDCSIDriver:\n enabled: true\n manageStorageClasses: true\n multizone: true\n nodeTags: wanderingwires-k8s-local-k8s-io-role-node\n cloudControllerManager:\n allocateNodeCIDRs: true\n cidrAllocatorType: CloudAllocator\n clusterCIDR: 100.96.0.0/11\n clusterName: wanderingwires-k8s-local\n image: k8scloudprovidergcp/cloud-controller-manager:latest\n leaderElection:\n leaderElect: true\n cloudProvider: gce\n clusterDNSDomain: cluster.local\n configBase: gs://wanderingwires-clusters/wanderingwires.k8s.local\n containerRuntime: containerd\n containerd:\n logLevel: info\n runc:\n version: 1.1.9\n version: 1.7.7\n docker:\n skipInstall: true\n etcdClusters:\n - backups:\n backupStore: gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/main\n cpuRequest: 200m\n etcdMembers:\n - instanceGroup: control-plane-us-west2-a\n name: a\n manager:\n backupRetentionDays: 90\n memoryRequest: 100Mi\n name: main\n version: 3.5.9\n - backups:\n backupStore: gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/events\n cpuRequest: 100m\n etcdMembers:\n - instanceGroup: control-plane-us-west2-a\n name: a\n manager:\n backupRetentionDays: 90\n memoryRequest: 100Mi\n name: events\n version: 3.5.9\n externalDns:\n provider: dns-controller\n iam:\n allowContainerRegistry: true\n legacy: false\n keyStore: gs://wanderingwires-clusters/wanderingwires.k8s.local/pki\n kubeAPIServer:\n allowPrivileged: true\n anonymousAuth: false\n apiAudiences:\n - kubernetes.svc.default\n apiServerCount: 1\n authorizationMode: Node,RBAC\n bindAddress: 0.0.0.0\n cloudProvider: external\n enableAdmissionPlugins:\n - NamespaceLifecycle\n - LimitRanger\n - ServiceAccount\n - DefaultStorageClass\n - DefaultTolerationSeconds\n - MutatingAdmissionWebhook\n - ValidatingAdmissionWebhook\n - NodeRestriction\n - ResourceQuota\n etcdServers:\n - https://127.0.0.1:4001\n etcdServersOverrides:\n - /events#https://127.0.0.1:4002\n image: registry.k8s.io/kube-apiserver:v1.28.5@sha256:4bb6f46baa98052399ee2270d5912edb97d4f8602ea2e2700f0527a887228112\n kubeletPreferredAddressTypes:\n - InternalIP\n - Hostname\n - ExternalIP\n logLevel: 2\n requestheaderAllowedNames:\n - aggregator\n requestheaderExtraHeaderPrefixes:\n - X-Remote-Extra-\n requestheaderGroupHeaders:\n - X-Remote-Group\n requestheaderUsernameHeaders:\n - X-Remote-User\n securePort: 443\n serviceAccountIssuer: https://api.internal.wanderingwires.k8s.local\n serviceAccountJWKSURI: https://api.internal.wanderingwires.k8s.local/openid/v1/jwks\n serviceClusterIPRange: 100.64.0.0/13\n storageBackend: etcd3\n kubeControllerManager:\n allocateNodeCIDRs: true\n attachDetachReconcileSyncPeriod: 1m0s\n cloudProvider: external\n clusterCIDR: 100.96.0.0/11\n clusterName: wanderingwires.k8s.local\n configureCloudRoutes: false\n image: registry.k8s.io/kube-controller-manager:v1.28.5@sha256:6e8c9171f74a4e3fadedce8f865f58092a597650a709702f2b122f6ca3b6cd32\n leaderElection:\n leaderElect: true\n logLevel: 2\n useServiceAccountCredentials: true\n kubeDNS:\n cacheMaxConcurrent: 150\n cacheMaxSize: 1000\n cpuRequest: 100m\n domain: cluster.local\n memoryLimit: 170Mi\n memoryRequest: 70Mi\n nodeLocalDNS:\n cpuRequest: 25m\n enabled: false\n image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20\n memoryRequest: 5Mi\n provider: CoreDNS\n serverIP: 100.64.0.10\n kubeProxy:\n clusterCIDR: 100.96.0.0/11\n cpuRequest: 100m\n enabled: false\n image: registry.k8s.io/kube-proxy:v1.28.5@sha256:516b53b4cd8f3ce2e9cbf0f30152d4a06d1eec57c881dde97cfaf2d802e132dc\n logLevel: 2\n kubeScheduler:\n image: registry.k8s.io/kube-scheduler:v1.28.5@sha256:9a48e33e454c904cf53c144a8d0ae3f5a4c2c5bd086b8953ad7d5a637b9aa007\n leaderElection:\n leaderElect: true\n logLevel: 2\n kubelet:\n anonymousAuth: false\n cgroupDriver: systemd\n cgroupRoot: /\n cloudProvider: external\n clusterDNS: 100.64.0.10\n clusterDomain: cluster.local\n enableDebuggingHandlers: true\n evictionHard: memory.available\u003c100Mi,nodefs.available\u003c10%,nodefs.inodesFree\u003c5%,imagefs.available\u003c10%,imagefs.inodesFree\u003c5%\n hairpinMode: promiscuous-bridge\n kubeconfigPath: /var/lib/kubelet/kubeconfig\n logLevel: 2\n podInfraContainerImage: registry.k8s.io/pause:3.9@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\n podManifestPath: /etc/kubernetes/manifests\n protectKernelDefaults: true\n registerSchedulable: true\n shutdownGracePeriod: 30s\n shutdownGracePeriodCriticalPods: 10s\n kubernetesApiAccess:\n - 0.0.0.0/0\n - ::/0\n kubernetesVersion: 1.28.5\n masterKubelet:\n anonymousAuth: false\n cgroupDriver: systemd\n cgroupRoot: /\n cloudProvider: external\n clusterDNS: 100.64.0.10\n clusterDomain: cluster.local\n enableDebuggingHandlers: true\n evictionHard: memory.available\u003c100Mi,nodefs.available\u003c10%,nodefs.inodesFree\u003c5%,imagefs.available\u003c10%,imagefs.inodesFree\u003c5%\n hairpinMode: promiscuous-bridge\n kubeconfigPath: /var/lib/kubelet/kubeconfig\n logLevel: 2\n podInfraContainerImage: registry.k8s.io/pause:3.9@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\n podManifestPath: /etc/kubernetes/manifests\n protectKernelDefaults: true\n registerSchedulable: true\n shutdownGracePeriod: 30s\n shutdownGracePeriodCriticalPods: 10s\n networking:\n cilium:\n agentPrometheusPort: 9090\n bpfCTGlobalAnyMax: 262144\n bpfCTGlobalTCPMax: 524288\n bpfLBAlgorithm: random\n bpfLBMaglevTableSize: 
\"16381\"\n bpfLBMapMax: 65536\n bpfNATGlobalMax: 524288\n bpfNeighGlobalMax: 524288\n bpfPolicyMapMax: 16384\n clusterName: default\n cpuRequest: 25m\n disableCNPStatusUpdates: true\n disableMasquerade: false\n enableBPFMasquerade: false\n enableEndpointHealthChecking: true\n enableL7Proxy: true\n enableNodePort: true\n enableRemoteNodeIdentity: true\n enableUnreachableRoutes: false\n hubble:\n enabled: false\n identityAllocationMode: crd\n identityChangeGracePeriod: 5s\n ipam: kubernetes\n memoryRequest: 128Mi\n monitorAggregation: medium\n sidecarIstioProxyImage: cilium/istio_proxy\n toFqdnsDnsRejectResponseCode: refused\n tunnel: vxlan\n version: v1.13.10\n nonMasqueradeCIDR: 100.64.0.0/10\n podCIDR: 100.96.0.0/11\n project: wanderingwires\n secretStore: gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets\n serviceClusterIPRange: 100.64.0.0/13\n sshAccess:\n - 0.0.0.0/0\n - ::/0\n subnets:\n - cidr: 10.0.32.0/20\n name: us-west2\n region: us-west2\n type: Public\n topology:\n dns:\n type: Private\n","PublicACL":null}
I0128 07:45:24.578629 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-gcp-cloud-controller.addons.k8s.io-k8s-1.23": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-gcp-cloud-controller.addons.k8s.io-k8s-1.23","Lifecycle":"Sync","Base":null,"Location":"addons/gcp-cloud-controller.addons.k8s.io/k8s-1.23.yaml","Contents":"apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n component: cloud-controller-manager\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: cloud-controller-manager\n template:\n metadata:\n creationTimestamp: null\n labels:\n component: cloud-controller-manager\n kops.k8s.io/managed-by: kops\n tier: control-plane\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n containers:\n - args:\n - --allocate-node-cidrs=true\n - --cidr-allocator-type=CloudAllocator\n - --cluster-cidr=100.96.0.0/11\n - --cluster-name=wanderingwires-k8s-local\n - --leader-elect=true\n - --v=2\n - --cloud-provider=gce\n - --use-service-account-credentials=true\n - --cloud-config=/etc/kubernetes/cloud.config\n command:\n - /usr/local/bin/cloud-controller-manager\n env:\n - name: KUBERNETES_SERVICE_HOST\n value: 127.0.0.1\n image: k8scloudprovidergcp/cloud-controller-manager:latest@sha256:881fd1095937638040723973ade90e6700f1c831a78fb585a3227c4d021b0df9\n imagePullPolicy: IfNotPresent\n livenessProbe:\n failureThreshold: 3\n httpGet:\n host: 127.0.0.1\n path: /healthz\n port: 10258\n scheme: HTTPS\n initialDelaySeconds: 15\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 
15\n name: cloud-controller-manager\n resources:\n requests:\n cpu: 200m\n volumeMounts:\n - mountPath: /etc/kubernetes/cloud.config\n name: cloudconfig\n readOnly: true\n hostNetwork: true\n nodeSelector: null\n priorityClassName: system-cluster-critical\n serviceAccountName: cloud-controller-manager\n tolerations:\n - effect: NoSchedule\n key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n - effect: NoSchedule\n key: node.kubernetes.io/not-ready\n - effect: NoSchedule\n key: node-role.kubernetes.io/master\n - effect: NoSchedule\n key: node-role.kubernetes.io/control-plane\n volumes:\n - hostPath:\n path: /etc/kubernetes/cloud.config\n type: \"\"\n name: cloudconfig\n updateStrategy:\n type: RollingUpdate\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: cloud-controller-manager\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: cloud-controller-manager:apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: Role\n name: extension-apiserver-authentication-reader\nsubjects:\n- apiGroup: \"\"\n kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system:cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n - 
events.k8s.io\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - get\n - list\n - watch\n - update\n- apiGroups:\n - coordination.k8s.io\n resourceNames:\n - cloud-controller-manager\n resources:\n - leases\n verbs:\n - get\n - update\n- apiGroups:\n - \"\"\n resources:\n - endpoints\n - serviceaccounts\n verbs:\n - create\n - get\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - update\n - patch\n- apiGroups:\n - \"\"\n resources:\n - namespaces\n verbs:\n - get\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - secrets\n verbs:\n - create\n - delete\n - get\n - update\n- apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n- apiGroups:\n - '*'\n resources:\n - '*'\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts/token\n verbs:\n - create\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system::leader-locking-cloud-controller-manager\n namespace: kube-system\nrules:\n- apiGroups:\n - \"\"\n resources:\n - configmaps\n verbs:\n - watch\n- apiGroups:\n - \"\"\n resourceNames:\n - cloud-controller-manager\n resources:\n - configmaps\n verbs:\n - get\n - update\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system:controller:cloud-node-controller\nrules:\n- apiGroups:\n - \"\"\n 
resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - list\n - update\n - delete\n - patch\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - get\n - list\n - update\n - delete\n - patch\n- apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - list\n - delete\n- apiGroups:\n - \"\"\n resources:\n - pods/status\n verbs:\n - list\n - delete\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system::leader-locking-cloud-controller-manager\n namespace: kube-system\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: Role\n name: system::leader-locking-cloud-controller-manager\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:cloud-controller-manager\nsubjects:\n- apiGroup: \"\"\n kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system:controller:cloud-node-controller\nroleRef:\n apiGroup: 
rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:controller:cloud-node-controller\nsubjects:\n- kind: ServiceAccount\n name: cloud-node-controller\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: gcp-cloud-controller.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n name: system:controller:pvl-controller\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumeclaims\n - persistentvolumes\n verbs:\n - list\n - watch","PublicACL":null}
I0128 07:45:24.579179 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-limit-range.addons.k8s.io": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-limit-range.addons.k8s.io","Lifecycle":"Sync","Base":null,"Location":"addons/limit-range.addons.k8s.io/v1.5.0.yaml","Contents":"apiVersion: v1\nkind: LimitRange\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: limit-range.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: limit-range.addons.k8s.io\n name: limits\n namespace: default\nspec:\n limits:\n - defaultRequest:\n cpu: 100m\n type: Container","PublicACL":null}
I0128 07:45:24.578353 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-storage-gce.addons.k8s.io-v1.7.0": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-storage-gce.addons.k8s.io-v1.7.0","Lifecycle":"Sync","Base":null,"Location":"addons/storage-gce.addons.k8s.io/v1.7.0.yaml","Contents":"apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: storage-gce.addons.k8s.io\n addonmanager.kubernetes.io/mode: EnsureExists\n app.kubernetes.io/managed-by: kops\n k8s-addon: storage-gce.addons.k8s.io\n kubernetes.io/cluster-service: \"true\"\n name: standard\nparameters:\n type: pd-standard\nprovisioner: kubernetes.io/gce-pd","PublicACL":null}
I0128 07:45:24.578331 54184 executor.go:192] Executing task "ManagedFile/etcd-cluster-spec-main": *fitasks.ManagedFile {"Name":"etcd-cluster-spec-main","Lifecycle":"Sync","Base":"gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/main","Location":"/control/etcd-cluster-spec","Contents":"{\n \"memberCount\": 1,\n \"etcdVersion\": \"3.5.9\"\n}","PublicACL":null}
I0128 07:45:24.578411 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-bootstrap": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-bootstrap","Lifecycle":"Sync","Base":null,"Location":"addons/bootstrap-channel.yaml","Contents":"kind: Addons\nmetadata:\n creationTimestamp: null\n name: bootstrap\nspec:\n addons:\n - id: k8s-1.16\n manifest: kops-controller.addons.k8s.io/k8s-1.16.yaml\n manifestHash: eb14757f9c01ec68952bc9d7461bbcbd7163654b27ad9d0d4b704fa7ecccf786\n name: kops-controller.addons.k8s.io\n needsRollingUpdate: control-plane\n selector:\n k8s-addon: kops-controller.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.12\n manifest: coredns.addons.k8s.io/k8s-1.12.yaml\n manifestHash: f8a4fe0a224e815f5468dace52e181fff39aed2d9a31e0c88fdf78edb1aeb8a4\n name: coredns.addons.k8s.io\n selector:\n k8s-addon: coredns.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.9\n manifest: kubelet-api.rbac.addons.k8s.io/k8s-1.9.yaml\n manifestHash: 01c120e887bd98d82ef57983ad58a0b22bc85efb48108092a24c4b82e4c9ea81\n name: kubelet-api.rbac.addons.k8s.io\n selector:\n k8s-addon: kubelet-api.rbac.addons.k8s.io\n version: 9.99.0\n - manifest: limit-range.addons.k8s.io/v1.5.0.yaml\n manifestHash: 2d55c3bc5e354e84a3730a65b42f39aba630a59dc8d32b30859fcce3d3178bc2\n name: limit-range.addons.k8s.io\n selector:\n k8s-addon: limit-range.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.12\n manifest: dns-controller.addons.k8s.io/k8s-1.12.yaml\n manifestHash: b9627d258d449a8b0a06402ed87f8d12478020cfc8b5164c781af4d3f82fc162\n name: dns-controller.addons.k8s.io\n selector:\n k8s-addon: dns-controller.addons.k8s.io\n version: 9.99.0\n - id: v1.7.0\n manifest: storage-gce.addons.k8s.io/v1.7.0.yaml\n manifestHash: 6c6d100b10243fc62e0195706aa862b42632faeac05a117d07a263a2c5a8e87c\n name: storage-gce.addons.k8s.io\n selector:\n k8s-addon: storage-gce.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.23\n manifest: gcp-pd-csi-driver.addons.k8s.io/k8s-1.23.yaml\n 
manifestHash: b0465ba563641ce5b3c09e0cd697f15dda990b79e63a835bff416babfb376498\n name: gcp-pd-csi-driver.addons.k8s.io\n selector:\n k8s-addon: gcp-pd-csi-driver.addons.k8s.io\n version: 9.99.0\n - id: v0.1.12\n manifest: metadata-proxy.addons.k8s.io/v0.1.12.yaml\n manifestHash: 8aea79f4523ac1b40fbe07289f35efb50a0adab7907e59cf3baadf05345ac2a8\n name: metadata-proxy.addons.k8s.io\n selector:\n k8s-addon: metadata-proxy.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.23\n manifest: gcp-cloud-controller.addons.k8s.io/k8s-1.23.yaml\n manifestHash: b35a342509ba7e2f30fd6768f9f72cc0bf9a62ec5892569f19c5e281d3f6ef9b\n name: gcp-cloud-controller.addons.k8s.io\n prune:\n kinds:\n - kind: ConfigMap\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - kind: Service\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - kind: ServiceAccount\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n namespaces:\n - kube-system\n - group: admissionregistration.k8s.io\n kind: MutatingWebhookConfiguration\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: admissionregistration.k8s.io\n kind: ValidatingWebhookConfiguration\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: apps\n kind: DaemonSet\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n namespaces:\n - kube-system\n - group: apps\n kind: Deployment\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: apps\n kind: StatefulSet\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: policy\n kind: PodDisruptionBudget\n labelSelector: 
addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: rbac.authorization.k8s.io\n kind: ClusterRole\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: rbac.authorization.k8s.io\n kind: ClusterRoleBinding\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n - group: rbac.authorization.k8s.io\n kind: Role\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n namespaces:\n - kube-system\n - group: rbac.authorization.k8s.io\n kind: RoleBinding\n labelSelector: addon.kops.k8s.io/name=gcp-cloud-controller.addons.k8s.io,app.kubernetes.io/managed-by=kops\n namespaces:\n - kube-system\n selector:\n k8s-addon: gcp-cloud-controller.addons.k8s.io\n version: 9.99.0\n - id: k8s-1.16\n manifest: networking.cilium.io/k8s-1.16-v1.13.yaml\n manifestHash: 23027652fd48f64284b729cd30759dd3d5d899dd7caa6bb6ef3529a846e4cd4e\n name: networking.cilium.io\n needsRollingUpdate: all\n selector:\n role.kubernetes.io/networking: \"1\"\n version: 9.99.0\n","PublicACL":null}
I0128 07:45:24.578365 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-coredns.addons.k8s.io-k8s-1.12": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-coredns.addons.k8s.io-k8s-1.12","Lifecycle":"Sync","Base":null,"Location":"addons/coredns.addons.k8s.io/k8s-1.12.yaml","Contents":"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n kubernetes.io/cluster-service: \"true\"\n name: coredns\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:coredns\nrules:\n- apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - pods\n - namespaces\n verbs:\n - list\n - watch\n- apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - list\n - watch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:coredns\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:coredns\nsubjects:\n- kind: ServiceAccount\n name: coredns\n namespace: kube-system\n\n---\n\napiVersion: v1\ndata:\n Corefile: |-\n .:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local. 
in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n hosts /rootfs/etc/hosts k8s.local {\n ttl 30\n fallthrough\n }\n prometheus :9153\n forward . /etc/resolv.conf {\n max_concurrent 1000\n }\n cache 30\n loop\n reload\n loadbalance\n }\nkind: ConfigMap\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n addonmanager.kubernetes.io/mode: EnsureExists\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n name: coredns\n namespace: kube-system\n\n---\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: coredns\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns\n strategy:\n rollingUpdate:\n maxSurge: 10%\n maxUnavailable: 1\n type: RollingUpdate\n template:\n metadata:\n creationTimestamp: null\n labels:\n k8s-app: kube-dns\n kops.k8s.io/managed-by: kops\n spec:\n containers:\n - args:\n - -conf\n - /etc/coredns/Corefile\n image: registry.k8s.io/coredns/coredns:v1.10.1@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\n imagePullPolicy: IfNotPresent\n livenessProbe:\n failureThreshold: 5\n httpGet:\n path: /health\n port: 8080\n scheme: HTTP\n initialDelaySeconds: 60\n successThreshold: 1\n timeoutSeconds: 5\n name: coredns\n ports:\n - containerPort: 53\n name: dns\n protocol: UDP\n - containerPort: 53\n name: dns-tcp\n protocol: TCP\n - containerPort: 9153\n name: metrics\n protocol: TCP\n readinessProbe:\n httpGet:\n path: /ready\n port: 8181\n scheme: HTTP\n resources:\n limits:\n memory: 170Mi\n requests:\n cpu: 100m\n memory: 70Mi\n securityContext:\n allowPrivilegeEscalation: false\n capabilities:\n add:\n - NET_BIND_SERVICE\n drop:\n - all\n 
readOnlyRootFilesystem: true\n volumeMounts:\n - mountPath: /etc/coredns\n name: config-volume\n readOnly: true\n - mountPath: /rootfs/etc/hosts\n name: etc-hosts\n readOnly: true\n dnsPolicy: Default\n nodeSelector:\n kubernetes.io/os: linux\n priorityClassName: system-cluster-critical\n serviceAccountName: coredns\n tolerations:\n - key: CriticalAddonsOnly\n operator: Exists\n topologySpreadConstraints:\n - labelSelector:\n matchLabels:\n k8s-app: kube-dns\n maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n - labelSelector:\n matchLabels:\n k8s-app: kube-dns\n maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: ScheduleAnyway\n volumes:\n - configMap:\n name: coredns\n name: config-volume\n - hostPath:\n path: /etc/hosts\n type: File\n name: etc-hosts\n\n---\n\napiVersion: v1\nkind: Service\nmetadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\n resourceVersion: \"0\"\nspec:\n clusterIP: 100.64.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n - name: dns-tcp\n port: 53\n protocol: TCP\n - name: metrics\n port: 9153\n protocol: TCP\n selector:\n k8s-app: kube-dns\n\n---\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n name: kube-dns\n namespace: kube-system\nspec:\n maxUnavailable: 50%\n selector:\n matchLabels:\n k8s-app: kube-dns\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n 
k8s-addon: coredns.addons.k8s.io\n name: coredns-autoscaler\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n name: coredns-autoscaler\nrules:\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - replicationcontrollers/scale\n verbs:\n - get\n - update\n- apiGroups:\n - extensions\n - apps\n resources:\n - deployments/scale\n - replicasets/scale\n verbs:\n - get\n - update\n- apiGroups:\n - \"\"\n resources:\n - configmaps\n verbs:\n - get\n - create\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n name: coredns-autoscaler\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: coredns-autoscaler\nsubjects:\n- kind: ServiceAccount\n name: coredns-autoscaler\n namespace: kube-system\n\n---\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: coredns.addons.k8s.io\n app.kubernetes.io/managed-by: kops\n k8s-addon: coredns.addons.k8s.io\n k8s-app: coredns-autoscaler\n kubernetes.io/cluster-service: \"true\"\n name: coredns-autoscaler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: coredns-autoscaler\n template:\n metadata:\n creationTimestamp: null\n labels:\n k8s-app: coredns-autoscaler\n kops.k8s.io/managed-by: kops\n spec:\n containers:\n - command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=coredns-autoscaler\n - --target=Deployment/coredns\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}}\n - 
--logtostderr=true\n - --v=2\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:v1.8.8@sha256:69bf675e356770c651864305f2ce17ec25623ac0ff77a040f9396e72daba2d5f\n name: autoscaler\n resources:\n requests:\n cpu: 20m\n memory: 10Mi\n nodeSelector:\n kubernetes.io/os: linux\n priorityClassName: system-cluster-critical\n serviceAccountName: coredns-autoscaler\n tolerations:\n - key: CriticalAddonsOnly\n operator: Exists","PublicACL":null}
I0128 07:45:24.578508 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-metadata-proxy.addons.k8s.io-v0.1.12": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-metadata-proxy.addons.k8s.io-v0.1.12","Lifecycle":"Sync","Base":null,"Location":"addons/metadata-proxy.addons.k8s.io/v0.1.12.yaml","Contents":"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: metadata-proxy.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: metadata-proxy.addons.k8s.io\n k8s-app: metadata-proxy\n kubernetes.io/cluster-service: \"true\"\n name: metadata-proxy\n namespace: kube-system\n\n---\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: metadata-proxy.addons.k8s.io\n addonmanager.kubernetes.io/mode: Reconcile\n app.kubernetes.io/managed-by: kops\n k8s-addon: metadata-proxy.addons.k8s.io\n k8s-app: metadata-proxy\n kubernetes.io/cluster-service: \"true\"\n version: v0.12\n name: metadata-proxy-v0.12\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: metadata-proxy\n version: v0.12\n template:\n metadata:\n creationTimestamp: null\n labels:\n k8s-app: metadata-proxy\n kops.k8s.io/managed-by: kops\n kubernetes.io/cluster-service: \"true\"\n version: v0.12\n spec:\n containers:\n - args:\n - -addr=169.254.169.252:988\n image: registry.k8s.io/metadata-proxy:v0.1.12@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a\n name: metadata-proxy\n resources:\n limits:\n cpu: 30m\n memory: 25Mi\n requests:\n cpu: 30m\n memory: 25Mi\n securityContext:\n privileged: true\n - command:\n - /monitor\n - --stackdriver-prefix=custom.googleapis.com/addons\n - --source=metadata_proxy:http://127.0.0.1:989?whitelisted=request_count\n - --pod-id=$(POD_NAME)\n - --namespace-id=$(POD_NAMESPACE)\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n 
fieldPath: metadata.name\n - name: POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n image: registry.k8s.io/prometheus-to-sd:v0.5.0@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e\n name: prometheus-to-sd-exporter\n resources:\n limits:\n cpu: 2m\n memory: 20Mi\n requests:\n cpu: 2m\n memory: 20Mi\n dnsPolicy: Default\n hostNetwork: true\n initContainers:\n - command:\n - /bin/sh\n - -c\n - |\n set -e\n set -x\n\n if (ip link show ens4); then\n PRIMARY_DEV=ens4\n else\n PRIMARY_DEV=eth0\n fi\n\n ip addr add dev lo 169.254.169.252/32\n iptables -w -t nat -I PREROUTING -p tcp -d 169.254.169.254 ! -i \"${PRIMARY_DEV}\" --dport 80 -m comment --comment \"metadata-concealment: bridge traffic to metadata server goes to metadata proxy\" -j DNAT --to-destination 169.254.169.252:988\n iptables -w -t nat -I PREROUTING -p tcp -d 169.254.169.254 ! -i \"${PRIMARY_DEV}\" --dport 8080 -m comment --comment \"metadata-concealment: bridge traffic to metadata server goes to metadata proxy\" -j DNAT --to-destination 169.254.169.252:987\n image: registry.k8s.io/k8s-custom-iptables:1.0@sha256:8b1a0831e88973e2937eae3458edb470f20d54bf80d88b6a3355f36266e16ca5\n imagePullPolicy: Always\n name: update-ipdtables\n securityContext:\n privileged: true\n volumeMounts:\n - mountPath: /host\n name: host\n nodeSelector:\n cloud.google.com/metadata-proxy-ready: \"true\"\n kubernetes.io/os: linux\n priorityClassName: system-node-critical\n serviceAccountName: metadata-proxy\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n operator: Exists\n - effect: NoSchedule\n operator: Exists\n volumes:\n - hostPath:\n path: /\n type: Directory\n name: host\n updateStrategy:\n type: RollingUpdate","PublicACL":null}
I0128 07:45:24.584476 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/apiserver-aggregator-ca/keyset.yaml"
I0128 07:45:24.584580 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/manifests/etcd/events-control-plane-us-west2-a.yaml"
I0128 07:45:24.584597 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/kubernetes-ca/keyset.yaml"
I0128 07:45:24.584666 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/etcd-clients-ca/keyset.yaml"
I0128 07:45:24.584682 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/dns-controller.addons.k8s.io/k8s-1.12.yaml"
I0128 07:45:24.584697 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/etcd-peers-ca-events/keyset.yaml"
I0128 07:45:24.584714 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/etcd-manager-ca-main/keyset.yaml"
I0128 07:45:24.579251 54184 executor.go:192] Executing task "ManagedFile/wanderingwires.k8s.local-addons-networking.cilium.io-k8s-1.16": *fitasks.ManagedFile {"Name":"wanderingwires.k8s.local-addons-networking.cilium.io-k8s-1.16","Lifecycle":"Sync","Base":null,"Location":"addons/networking.cilium.io/k8s-1.16-v1.13.yaml","Contents":"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium\n namespace: kube-system\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium-operator\n namespace: kube-system\n\n---\n\napiVersion: v1\ndata:\n agent-health-port: \"9879\"\n auto-direct-node-routes: \"false\"\n bpf-ct-global-any-max: \"262144\"\n bpf-ct-global-tcp-max: \"524288\"\n bpf-lb-algorithm: random\n bpf-lb-maglev-table-size: \"16381\"\n bpf-lb-map-max: \"65536\"\n bpf-lb-sock-hostns-only: \"false\"\n bpf-nat-global-max: \"524288\"\n bpf-neigh-global-max: \"524288\"\n bpf-policy-map-max: \"16384\"\n cgroup-root: /run/cilium/cgroupv2\n cluster-name: default\n debug: \"false\"\n disable-cnp-status-updates: \"true\"\n disable-endpoint-crd: \"false\"\n enable-bpf-masquerade: \"false\"\n enable-endpoint-health-checking: \"true\"\n enable-ipv4: \"true\"\n enable-ipv4-masquerade: \"true\"\n enable-ipv6: \"false\"\n enable-ipv6-masquerade: \"false\"\n enable-l7-proxy: \"true\"\n enable-node-port: \"true\"\n enable-remote-node-identity: \"true\"\n enable-service-topology: \"false\"\n enable-unreachable-routes: \"false\"\n identity-allocation-mode: crd\n identity-change-grace-period: 5s\n install-iptables-rules: \"true\"\n ipam: kubernetes\n kube-proxy-replacement: strict\n monitor-aggregation: medium\n nodes-gc-interval: 5m0s\n 
preallocate-bpf-maps: \"false\"\n sidecar-istio-proxy-image: cilium/istio_proxy\n tofqdns-dns-reject-response-code: refused\n tofqdns-enable-poller: \"false\"\n tunnel: vxlan\nkind: ConfigMap\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium-config\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium\nrules:\n- apiGroups:\n - networking.k8s.io\n resources:\n - networkpolicies\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - namespaces\n - services\n - pods\n - endpoints\n - nodes\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - apiextensions.k8s.io\n resources:\n - customresourcedefinitions\n verbs:\n - list\n - watch\n - get\n- apiGroups:\n - cilium.io\n resources:\n - ciliumloadbalancerippools\n - ciliumbgppeeringpolicies\n - ciliumclusterwideenvoyconfigs\n - ciliumclusterwidenetworkpolicies\n - ciliumegressgatewaypolicies\n - ciliumegressnatpolicies\n - ciliumendpoints\n - ciliumendpointslices\n - ciliumenvoyconfigs\n - ciliumidentities\n - ciliumlocalredirectpolicies\n - ciliumnetworkpolicies\n - ciliumnodes\n verbs:\n - list\n - watch\n- apiGroups:\n - cilium.io\n resources:\n - ciliumidentities\n - ciliumendpoints\n - ciliumnodes\n verbs:\n - create\n- apiGroups:\n - cilium.io\n resources:\n - ciliumidentities\n verbs:\n - update\n- apiGroups:\n - cilium.io\n resources:\n - ciliumendpoints\n verbs:\n - delete\n - get\n- apiGroups:\n - cilium.io\n resources:\n - ciliumnodes\n - ciliumnodes/status\n verbs:\n - get\n - update\n- apiGroups:\n - cilium.io\n resources:\n - 
ciliumnetworkpolicies/status\n - ciliumclusterwidenetworkpolicies/status\n - ciliumendpoints/status\n - ciliumendpoints\n verbs:\n - patch\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium-operator\nrules:\n- apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services\n - endpoints\n - namespaces\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - cilium.io\n resources:\n - ciliumnetworkpolicies\n - ciliumclusterwidenetworkpolicies\n verbs:\n - create\n - update\n - deletecollection\n - patch\n - get\n - list\n - watch\n- apiGroups:\n - cilium.io\n resources:\n - ciliumnetworkpolicies/status\n - ciliumclusterwidenetworkpolicies/status\n verbs:\n - patch\n - update\n- apiGroups:\n - cilium.io\n resources:\n - ciliumendpoints\n - ciliumidentities\n verbs:\n - delete\n - list\n - watch\n- apiGroups:\n - cilium.io\n resources:\n - ciliumidentities\n verbs:\n - update\n- apiGroups:\n - cilium.io\n resources:\n - ciliumnodes\n verbs:\n - create\n - update\n - get\n - list\n - watch\n - delete\n- apiGroups:\n - cilium.io\n resources:\n - ciliumnodes/status\n verbs:\n - update\n- apiGroups:\n - cilium.io\n resources:\n - ciliumendpointslices\n - ciliumenvoyconfigs\n verbs:\n - create\n - update\n - get\n - list\n - watch\n - delete\n - patch\n- apiGroups:\n - apiextensions.k8s.io\n resources:\n - customresourcedefinitions\n verbs:\n - create\n - get\n - list\n - watch\n- apiGroups:\n - apiextensions.k8s.io\n resourceNames:\n - ciliumloadbalancerippools.cilium.io\n - ciliumbgppeeringpolicies.cilium.io\n - 
ciliumclusterwideenvoyconfigs.cilium.io\n - ciliumclusterwidenetworkpolicies.cilium.io\n - ciliumegressgatewaypolicies.cilium.io\n - ciliumegressnatpolicies.cilium.io\n - ciliumendpoints.cilium.io\n - ciliumendpointslices.cilium.io\n - ciliumenvoyconfigs.cilium.io\n - ciliumexternalworkloads.cilium.io\n - ciliumidentities.cilium.io\n - ciliumlocalredirectpolicies.cilium.io\n - ciliumnetworkpolicies.cilium.io\n - ciliumnodes.cilium.io\n resources:\n - customresourcedefinitions\n verbs:\n - update\n- apiGroups:\n - cilium.io\n resources:\n - ciliumloadbalancerippools\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - cilium.io\n resources:\n - ciliumloadbalancerippools/status\n verbs:\n - patch\n- apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - get\n - update\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cilium\nsubjects:\n- kind: ServiceAccount\n name: cilium\n namespace: kube-system\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n role.kubernetes.io/networking: \"1\"\n name: cilium-operator\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cilium-operator\nsubjects:\n- kind: ServiceAccount\n name: cilium-operator\n namespace: kube-system\n\n---\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n k8s-app: cilium\n kubernetes.io/cluster-service: \"true\"\n role.kubernetes.io/networking: \"1\"\n name: cilium\n namespace: kube-system\nspec:\n 
selector:\n matchLabels:\n k8s-app: cilium\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n creationTimestamp: null\n labels:\n k8s-app: cilium\n kops.k8s.io/managed-by: kops\n kubernetes.io/cluster-service: \"true\"\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n containers:\n - args:\n - --config-dir=/tmp/cilium/config-map\n command:\n - cilium-agent\n env:\n - name: K8S_NODE_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: spec.nodeName\n - name: CILIUM_K8S_NAMESPACE\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n - name: CILIUM_CLUSTERMESH_CONFIG\n value: /var/lib/cilium/clustermesh/\n - name: CILIUM_CNI_CHAINING_MODE\n valueFrom:\n configMapKeyRef:\n key: cni-chaining-mode\n name: cilium-config\n optional: true\n - name: CILIUM_CUSTOM_CNI_CONF\n valueFrom:\n configMapKeyRef:\n key: custom-cni-conf\n name: cilium-config\n optional: true\n - name: KUBERNETES_SERVICE_HOST\n value: api.internal.wanderingwires.k8s.local\n - name: KUBERNETES_SERVICE_PORT\n value: \"443\"\n image: quay.io/cilium/cilium:v1.13.10@sha256:aff23a5d4daf06623d10f7ff6a8a7b617c22c1f8d63f507fd33ef3a80ba4779f\n imagePullPolicy: IfNotPresent\n lifecycle:\n postStart:\n exec:\n command:\n - /cni-install.sh\n - --cni-exclusive=true\n preStop:\n exec:\n command:\n - /cni-uninstall.sh\n livenessProbe:\n failureThreshold: 10\n httpGet:\n host: 127.0.0.1\n httpHeaders:\n - name: brief\n value: \"true\"\n path: /healthz\n port: 9879\n scheme: HTTP\n periodSeconds: 30\n successThreshold: 1\n timeoutSeconds: 5\n name: cilium-agent\n readinessProbe:\n failureThreshold: 3\n httpGet:\n host: 127.0.0.1\n httpHeaders:\n - name: brief\n value: \"true\"\n path: /healthz\n port: 9879\n scheme: HTTP\n initialDelaySeconds: 5\n periodSeconds: 30\n successThreshold: 1\n timeoutSeconds: 5\n resources:\n requests:\n 
cpu: 25m\n memory: 128Mi\n securityContext:\n privileged: true\n startupProbe:\n failureThreshold: 105\n httpGet:\n host: 127.0.0.1\n httpHeaders:\n - name: brief\n value: \"true\"\n path: /healthz\n port: 9879\n scheme: HTTP\n periodSeconds: 2\n successThreshold: null\n terminationMessagePolicy: FallbackToLogsOnError\n volumeMounts:\n - mountPath: /sys/fs/bpf\n mountPropagation: Bidirectional\n name: bpf-maps\n - mountPath: /var/run/cilium\n name: cilium-run\n - mountPath: /host/etc/cni/net.d\n name: etc-cni-netd\n - mountPath: /var/lib/cilium/clustermesh\n name: clustermesh-secrets\n readOnly: true\n - mountPath: /tmp/cilium/config-map\n name: cilium-config-path\n readOnly: true\n - mountPath: /lib/modules\n name: lib-modules\n readOnly: true\n - mountPath: /run/xtables.lock\n name: xtables-lock\n hostNetwork: true\n initContainers:\n - command:\n - /install-plugin.sh\n image: quay.io/cilium/cilium:v1.13.10@sha256:aff23a5d4daf06623d10f7ff6a8a7b617c22c1f8d63f507fd33ef3a80ba4779f\n imagePullPolicy: IfNotPresent\n name: install-cni-binaries\n resources:\n requests:\n cpu: 100m\n memory: 10Mi\n securityContext:\n capabilities:\n drop:\n - ALL\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: FallbackToLogsOnError\n volumeMounts:\n - mountPath: /host/opt/cni/bin\n name: cni-path\n - command:\n - /init-container.sh\n env:\n - name: CILIUM_ALL_STATE\n valueFrom:\n configMapKeyRef:\n key: clean-cilium-state\n name: cilium-config\n optional: true\n - name: CILIUM_BPF_STATE\n valueFrom:\n configMapKeyRef:\n key: clean-cilium-bpf-state\n name: cilium-config\n optional: true\n image: quay.io/cilium/cilium:v1.13.10@sha256:aff23a5d4daf06623d10f7ff6a8a7b617c22c1f8d63f507fd33ef3a80ba4779f\n imagePullPolicy: IfNotPresent\n name: clean-cilium-state\n resources:\n limits:\n memory: 100Mi\n requests:\n cpu: 100m\n memory: 100Mi\n securityContext:\n privileged: true\n terminationMessagePolicy: FallbackToLogsOnError\n volumeMounts:\n - mountPath: /sys/fs/bpf\n 
name: bpf-maps\n - mountPath: /run/cilium/cgroupv2\n mountPropagation: HostToContainer\n name: cilium-cgroup\n - mountPath: /var/run/cilium\n name: cilium-run\n priorityClassName: system-node-critical\n restartPolicy: Always\n serviceAccount: cilium\n serviceAccountName: cilium\n terminationGracePeriodSeconds: 1\n tolerations:\n - operator: Exists\n volumes:\n - hostPath:\n path: /var/run/cilium\n type: DirectoryOrCreate\n name: cilium-run\n - hostPath:\n path: /sys/fs/bpf\n type: DirectoryOrCreate\n name: bpf-maps\n - hostPath:\n path: /opt/cni/bin\n type: DirectoryOrCreate\n name: cni-path\n - hostPath:\n path: /run/cilium/cgroupv2\n type: Directory\n name: cilium-cgroup\n - hostPath:\n path: /etc/cni/net.d\n type: DirectoryOrCreate\n name: etc-cni-netd\n - hostPath:\n path: /lib/modules\n name: lib-modules\n - hostPath:\n path: /run/xtables.lock\n type: FileOrCreate\n name: xtables-lock\n - name: clustermesh-secrets\n secret:\n defaultMode: 420\n optional: true\n secretName: cilium-clustermesh\n - configMap:\n name: cilium-config\n name: cilium-config-path\n updateStrategy:\n type: OnDelete\n\n---\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n creationTimestamp: null\n labels:\n addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n io.cilium/app: operator\n name: cilium-operator\n role.kubernetes.io/networking: \"1\"\n name: cilium-operator\n namespace: kube-system\nspec:\n replicas: 1\n selector:\n matchLabels:\n io.cilium/app: operator\n name: cilium-operator\n strategy:\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate\n template:\n metadata:\n creationTimestamp: null\n labels:\n io.cilium/app: operator\n kops.k8s.io/managed-by: kops\n name: cilium-operator\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n - matchExpressions:\n - key: 
node-role.kubernetes.io/master\n operator: Exists\n containers:\n - args:\n - --config-dir=/tmp/cilium/config-map\n - --debug=$(CILIUM_DEBUG)\n - --eni-tags=KubernetesCluster=wanderingwires.k8s.local\n command:\n - cilium-operator\n env:\n - name: K8S_NODE_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: spec.nodeName\n - name: CILIUM_K8S_NAMESPACE\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n - name: CILIUM_DEBUG\n valueFrom:\n configMapKeyRef:\n key: debug\n name: cilium-config\n optional: true\n - name: KUBERNETES_SERVICE_HOST\n value: api.internal.wanderingwires.k8s.local\n - name: KUBERNETES_SERVICE_PORT\n value: \"443\"\n image: quay.io/cilium/operator:v1.13.10@sha256:1b51da9c4fa162f2efe470a3e8a0128ebbdb48821d7dd818216b063b89e06705\n imagePullPolicy: IfNotPresent\n livenessProbe:\n httpGet:\n host: 127.0.0.1\n path: /healthz\n port: 9234\n scheme: HTTP\n initialDelaySeconds: 60\n periodSeconds: 10\n timeoutSeconds: 3\n name: cilium-operator\n resources:\n requests:\n cpu: 25m\n memory: 128Mi\n terminationMessagePolicy: FallbackToLogsOnError\n volumeMounts:\n - mountPath: /tmp/cilium/config-map\n name: cilium-config-path\n readOnly: true\n hostNetwork: true\n nodeSelector: null\n priorityClassName: system-cluster-critical\n restartPolicy: Always\n serviceAccount: cilium-operator\n serviceAccountName: cilium-operator\n tolerations:\n - operator: Exists\n topologySpreadConstraints:\n - labelSelector:\n matchLabels:\n io.cilium/app: operator\n name: cilium-operator\n maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n - labelSelector:\n matchLabels:\n io.cilium/app: operator\n name: cilium-operator\n maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n volumes:\n - configMap:\n name: cilium-config\n name: cilium-config-path\n\n---\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n creationTimestamp: null\n labels:\n 
addon.kops.k8s.io/name: networking.cilium.io\n app.kubernetes.io/managed-by: kops\n io.cilium/app: operator\n name: cilium-operator\n role.kubernetes.io/networking: \"1\"\n name: cilium-operator\n namespace: kube-system\nspec:\n maxUnavailable: 1\n selector:\n matchLabels:\n io.cilium/app: operator\n name: cilium-operator","PublicACL":null}
I0128 07:45:24.585663 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets/kube-proxy"
I0128 07:45:24.585663 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/pki/private/etcd-peers-ca-main/keyset.yaml"
I0128 07:45:24.585663 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/gcp-pd-csi-driver.addons.k8s.io/k8s-1.23.yaml"
I0128 07:45:24.585682 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/manifests/static/kube-apiserver-healthcheck.yaml"
I0128 07:45:24.585695 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/kubelet-api.rbac.addons.k8s.io/k8s-1.9.yaml"
I0128 07:45:24.586004 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/cluster-completed.spec"
I0128 07:45:24.586988 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/networking.cilium.io/k8s-1.16-v1.13.yaml"
I0128 07:45:24.586041 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/events/control/etcd-cluster-spec"
I0128 07:45:24.586053 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/storage-gce.addons.k8s.io/v1.7.0.yaml"
I0128 07:45:24.586075 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/kops-version.txt"
I0128 07:45:24.586078 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/gcp-cloud-controller.addons.k8s.io/k8s-1.23.yaml"
I0128 07:45:24.586097 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/limit-range.addons.k8s.io/v1.5.0.yaml"
I0128 07:45:24.586100 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/backups/etcd/main/control/etcd-cluster-spec"
I0128 07:45:24.586115 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/bootstrap-channel.yaml"
I0128 07:45:24.586225 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/metadata-proxy.addons.k8s.io/v0.1.12.yaml"
I0128 07:45:24.586244 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/coredns.addons.k8s.io/k8s-1.12.yaml"
I0128 07:45:24.666079 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.666183 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.666642 54184 changes.go:81] Field changed "Issuer" actual="cn=service-account" expected=""
I0128 07:45:24.700361 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.700443 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.701018 54184 changes.go:81] Field changed "Issuer" actual="cn=etcd-peers-ca-main" expected=""
I0128 07:45:24.701997 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.702071 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.702628 54184 changes.go:81] Field changed "Issuer" actual="cn=kubernetes-ca" expected=""
I0128 07:45:24.704981 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.705036 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.705591 54184 changes.go:81] Field changed "Issuer" actual="cn=etcd-peers-ca-events" expected=""
I0128 07:45:24.713051 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.713128 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.713689 54184 changes.go:81] Field changed "Issuer" actual="cn=etcd-manager-ca-events" expected=""
I0128 07:45:24.719108 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.719167 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.719605 54184 changes.go:81] Field changed "Issuer" actual="cn=apiserver-aggregator-ca" expected=""
I0128 07:45:24.722362 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.722428 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.722996 54184 changes.go:81] Field changed "Issuer" actual="cn=etcd-clients-ca" expected=""
I0128 07:45:24.741006 54184 certificate.go:104] Parsing pem block: "CERTIFICATE"
I0128 07:45:24.741046 54184 privatekey.go:194] Parsing pem block: "RSA PRIVATE KEY"
I0128 07:45:24.741254 54184 changes.go:81] Field changed "Issuer" actual="cn=etcd-manager-ca-main" expected=""
I0128 07:45:24.910034 54184 changes.go:154] comparing maps: k8s-io-cluster-name wanderingwires-k8s-local wanderingwires-k8s-local
I0128 07:45:24.910076 54184 changes.go:154] comparing maps: k8s-io-etcd-events a-2fa a-2fa
I0128 07:45:24.910093 54184 changes.go:154] comparing maps: k8s-io-role-master master master
I0128 07:45:24.910133 54184 changes.go:154] comparing maps: k8s-io-cluster-name wanderingwires-k8s-local wanderingwires-k8s-local
I0128 07:45:24.910177 54184 changes.go:154] comparing maps: k8s-io-etcd-main a-2fa a-2fa
I0128 07:45:24.910194 54184 changes.go:154] comparing maps: k8s-io-role-master master master
I0128 07:45:25.079267 54184 executor.go:111] Tasks: 45 done / 72 total; 19 can run
I0128 07:45:25.079348 54184 executor.go:192] Executing task "FirewallRule/master-to-master-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"master-to-master-wanderingwires-k8s-local","Family":"","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"SourceRanges":null,"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"Allowed":["tcp","udp","icmp","esp","ah","sctp"],"Disabled":false}
I0128 07:45:25.079440 54184 executor.go:192] Executing task "BootstrapScript/control-plane-us-west2-a": &{control-plane-us-west2-a Sync 0xc000581900 0xc000dc3c00 0xc0008ccf90 {<nil> 0xc000bcfa40} [0xc000c5ab40] map[apiserver-aggregator-ca:0xc00164f780 etcd-clients-ca:0xc0009a7300 etcd-manager-ca-events:0xc0006e0d00 etcd-manager-ca-main:0xc0009a6f80 etcd-peers-ca-events:0xc0006e0d80 etcd-peers-ca-main:0xc0009a7000 kubernetes-ca:0xc00164f700 service-account:0xc00164f800] {<nil> 0xc000bcfa40}}
I0128 07:45:25.079452 54184 executor.go:192] Executing task "FirewallRule/nodeport-external-to-node-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"nodeport-external-to-node-wanderingwires-k8s-local","Family":"ipv4","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["0.0.0.0/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp:30000-32767","udp:30000-32767"],"Disabled":true}
I0128 07:45:25.079499 54184 executor.go:192] Executing task "FirewallRule/https-api-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"https-api-wanderingwires-k8s-local","Family":"ipv4","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["0.0.0.0/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane"],"Allowed":["tcp:443"],"Disabled":false}
I0128 07:45:25.079557 54184 executor.go:192] Executing task "FirewallRule/master-to-node-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"master-to-node-wanderingwires-k8s-local","Family":"","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"SourceRanges":null,"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp","udp","icmp","esp","ah","sctp"],"Disabled":false}
I0128 07:45:25.079595 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:25.079632 54184 executor.go:192] Executing task "FirewallRule/ssh-external-to-master-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"ssh-external-to-master-wanderingwires-k8s-local","Family":"ipv4","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["0.0.0.0/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"Allowed":["tcp:22"],"Disabled":false}
I0128 07:45:25.079636 54184 executor.go:192] Executing task "BootstrapScript/nodes-us-west2-a": &{nodes-us-west2-a Sync 0xc000581900 0xc000f1e000 0xc0008ccf90 {<nil> 0xc000bcfae0} [0xc000c5ab40] map[etcd-clients-ca:0xc0009a7300 etcd-manager-ca-events:0xc0006e0d00 etcd-manager-ca-main:0xc0009a6f80 etcd-peers-ca-events:0xc0006e0d80 etcd-peers-ca-main:0xc0009a7000 kubernetes-ca:0xc00164f700] {<nil> 0xc000bcfae0}}
I0128 07:45:25.079851 54184 executor.go:192] Executing task "FirewallRule/https-api-ipv6-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"https-api-ipv6-wanderingwires-k8s-local","Family":"ipv6","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["::/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane"],"Allowed":["tcp:443"],"Disabled":false}
I0128 07:45:25.079854 54184 executor.go:192] Executing task "FirewallRule/lb-health-checks-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"lb-health-checks-wanderingwires-k8s-local","Family":"ipv4","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["35.191.0.0/16","130.211.0.0/22"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane"],"Allowed":["tcp"],"Disabled":false}
I0128 07:45:25.079821 54184 executor.go:192] Executing task "MirrorKeystore/mirror-keystore": *fitasks.MirrorKeystore {"Name":"mirror-keystore","Lifecycle":"Sync","MirrorPath":{}}
I0128 07:45:25.079388 54184 executor.go:192] Executing task "FirewallRule/ssh-external-to-master-ipv6-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"ssh-external-to-master-ipv6-wanderingwires-k8s-local","Family":"ipv6","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["::/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"Allowed":["tcp:22"],"Disabled":false}
I0128 07:45:25.079854 54184 executor.go:192] Executing task "MirrorSecrets/mirror-secrets": *fitasks.MirrorSecrets {"Name":"mirror-secrets","Lifecycle":"Sync","MirrorPath":{}}
I0128 07:45:25.079901 54184 executor.go:192] Executing task "TargetPool/api-wanderingwires-k8s-local": *gcetasks.TargetPool {"Name":"api-wanderingwires-k8s-local","HealthCheck":{"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","SelfLink":"https://www.googleapis.com/compute/v1/projects/wanderingwires/global/httpHealthChecks/api-wanderingwires-k8s-local","Port":3990,"RequestPath":"/healthz"},"Lifecycle":"Sync"}
I0128 07:45:25.079980 54184 executor.go:192] Executing task "FirewallRule/nodeport-external-to-node-ipv6-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"nodeport-external-to-node-ipv6-wanderingwires-k8s-local","Family":"ipv6","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["::/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp:30000-32767","udp:30000-32767"],"Disabled":true}
I0128 07:45:25.079416 54184 executor.go:192] Executing task "FirewallRule/node-to-node-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"node-to-node-wanderingwires-k8s-local","Family":"","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":["wanderingwires-k8s-local-k8s-io-role-node"],"SourceRanges":null,"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp","udp","icmp","esp","ah","sctp"],"Disabled":false}
I0128 07:45:25.080063 54184 executor.go:192] Executing task "FirewallRule/ssh-external-to-node-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"ssh-external-to-node-wanderingwires-k8s-local","Family":"ipv4","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["0.0.0.0/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp:22"],"Disabled":false}
I0128 07:45:25.080089 54184 executor.go:192] Executing task "FirewallRule/node-to-master-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"node-to-master-wanderingwires-k8s-local","Family":"","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":["wanderingwires-k8s-local-k8s-io-role-node"],"SourceRanges":null,"TargetTags":["wanderingwires-k8s-local-k8s-io-role-control-plane","wanderingwires-k8s-local-k8s-io-role-master"],"Allowed":["tcp:443","tcp:10250","tcp:3988","udp:3993","tcp:3993","udp:4000","tcp:4000","udp:8472"],"Disabled":false}
I0128 07:45:25.079860 54184 executor.go:192] Executing task "FirewallRule/ssh-external-to-node-ipv6-wanderingwires-k8s-local": *gcetasks.FirewallRule {"Name":"ssh-external-to-node-ipv6-wanderingwires-k8s-local","Family":"ipv6","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"SourceTags":null,"SourceRanges":["::/0"],"TargetTags":["wanderingwires-k8s-local-k8s-io-role-node"],"Allowed":["tcp:22"],"Disabled":false}
I0128 07:45:25.206674 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
I0128 07:45:25.209864 54184 bootstrapscript.go:92] Resolved alternateName "34.94.37.195" for "*gcetasks.Address {\"Name\":\"api-wanderingwires-k8s-local\",\"Lifecycle\":\"Sync\",\"IPAddress\":\"34.94.37.195\",\"IPAddressType\":null,\"Purpose\":null,\"ForAPIServer\":true,\"Subnetwork\":null}"
I0128 07:45:25.261363 54184 changes.go:179] comparing slices: 0 ::/0 ::/0
I0128 07:45:25.261407 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.261427 54184 changes.go:179] comparing slices: 0 tcp:22 tcp:22
I0128 07:45:25.265639 54184 changes.go:179] comparing slices: 0 0.0.0.0/0 0.0.0.0/0
I0128 07:45:25.265684 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.265703 54184 changes.go:179] comparing slices: 0 tcp:30000-32767 tcp:30000-32767
I0128 07:45:25.265724 54184 changes.go:179] comparing slices: 1 udp:30000-32767 udp:30000-32767
I0128 07:45:25.267587 54184 changes.go:179] comparing slices: 0 ::/0 ::/0
I0128 07:45:25.267630 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.267648 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.267672 54184 changes.go:179] comparing slices: 0 tcp:22 tcp:22
I0128 07:45:25.268192 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.268222 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.268241 54184 changes.go:179] comparing slices: 0 tcp tcp
I0128 07:45:25.268259 54184 changes.go:179] comparing slices: 1 udp udp
I0128 07:45:25.268277 54184 changes.go:179] comparing slices: 2 icmp icmp
I0128 07:45:25.268299 54184 changes.go:179] comparing slices: 3 esp esp
I0128 07:45:25.268317 54184 changes.go:179] comparing slices: 4 ah ah
I0128 07:45:25.268335 54184 changes.go:179] comparing slices: 5 sctp sctp
I0128 07:45:25.278706 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.278745 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.278788 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.278809 54184 changes.go:179] comparing slices: 0 tcp tcp
I0128 07:45:25.278828 54184 changes.go:179] comparing slices: 1 udp udp
I0128 07:45:25.278845 54184 changes.go:179] comparing slices: 2 icmp icmp
I0128 07:45:25.278864 54184 changes.go:179] comparing slices: 3 esp esp
I0128 07:45:25.278883 54184 changes.go:179] comparing slices: 4 ah ah
I0128 07:45:25.278902 54184 changes.go:179] comparing slices: 5 sctp sctp
I0128 07:45:25.278894 54184 changes.go:179] comparing slices: 0 ::/0 ::/0
I0128 07:45:25.278941 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.278964 54184 changes.go:179] comparing slices: 0 tcp:443 tcp:443
I0128 07:45:25.295477 54184 changes.go:179] comparing slices: 0 35.191.0.0/16 35.191.0.0/16
I0128 07:45:25.295506 54184 changes.go:179] comparing slices: 1 130.211.0.0/22 130.211.0.0/22
I0128 07:45:25.295524 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.295544 54184 changes.go:179] comparing slices: 0 tcp tcp
I0128 07:45:25.296112 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.296135 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.296150 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.296164 54184 changes.go:179] comparing slices: 0 tcp:443 tcp:443
I0128 07:45:25.296178 54184 changes.go:179] comparing slices: 1 tcp:10250 tcp:10250
I0128 07:45:25.296192 54184 changes.go:179] comparing slices: 2 tcp:3988 tcp:3988
I0128 07:45:25.296206 54184 changes.go:179] comparing slices: 3 udp:3993 udp:3993
I0128 07:45:25.296225 54184 changes.go:179] comparing slices: 4 tcp:3993 tcp:3993
I0128 07:45:25.296240 54184 changes.go:179] comparing slices: 5 udp:4000 udp:4000
I0128 07:45:25.296254 54184 changes.go:179] comparing slices: 6 tcp:4000 tcp:4000
I0128 07:45:25.296268 54184 changes.go:179] comparing slices: 7 udp:8472 udp:8472
I0128 07:45:25.297333 54184 changes.go:179] comparing slices: 0 0.0.0.0/0 0.0.0.0/0
I0128 07:45:25.297365 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.297385 54184 changes.go:179] comparing slices: 0 tcp:22 tcp:22
I0128 07:45:25.300179 54184 changes.go:179] comparing slices: 0 0.0.0.0/0 0.0.0.0/0
I0128 07:45:25.300223 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.300244 54184 changes.go:179] comparing slices: 0 tcp:443 tcp:443
I0128 07:45:25.300367 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.300397 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.300417 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.300436 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.300456 54184 changes.go:179] comparing slices: 0 tcp tcp
I0128 07:45:25.300474 54184 changes.go:179] comparing slices: 1 udp udp
I0128 07:45:25.300493 54184 changes.go:179] comparing slices: 2 icmp icmp
I0128 07:45:25.300510 54184 changes.go:179] comparing slices: 3 esp esp
I0128 07:45:25.300528 54184 changes.go:179] comparing slices: 4 ah ah
I0128 07:45:25.300546 54184 changes.go:179] comparing slices: 5 sctp sctp
I0128 07:45:25.301463 54184 changes.go:179] comparing slices: 0 ::/0 ::/0
I0128 07:45:25.301490 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-node wanderingwires-k8s-local-k8s-io-role-node
I0128 07:45:25.301505 54184 changes.go:179] comparing slices: 0 tcp:30000-32767 tcp:30000-32767
I0128 07:45:25.301520 54184 changes.go:179] comparing slices: 1 udp:30000-32767 udp:30000-32767
I0128 07:45:25.302484 54184 bootstrapscript.go:92] Resolved alternateName "34.94.37.195" for "*gcetasks.Address {\"Name\":\"api-wanderingwires-k8s-local\",\"Lifecycle\":\"Sync\",\"IPAddress\":\"34.94.37.195\",\"IPAddressType\":null,\"Purpose\":null,\"ForAPIServer\":true,\"Subnetwork\":null}"
I0128 07:45:25.317160 54184 changes.go:179] comparing slices: 0 0.0.0.0/0 0.0.0.0/0
I0128 07:45:25.317189 54184 changes.go:179] comparing slices: 0 wanderingwires-k8s-local-k8s-io-role-control-plane wanderingwires-k8s-local-k8s-io-role-control-plane
I0128 07:45:25.317196 54184 changes.go:179] comparing slices: 1 wanderingwires-k8s-local-k8s-io-role-master wanderingwires-k8s-local-k8s-io-role-master
I0128 07:45:25.317203 54184 changes.go:179] comparing slices: 0 tcp:22 tcp:22
W0128 07:45:25.317225 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m59s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:25.317253 54184 executor.go:111] Tasks: 63 done / 72 total; 5 can run
I0128 07:45:25.317368 54184 executor.go:192] Executing task "PoolHealthCheck/api-wanderingwires-k8s-local": *gcetasks.PoolHealthCheck {"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","Healthcheck":{"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","SelfLink":"https://www.googleapis.com/compute/v1/projects/wanderingwires/global/httpHealthChecks/api-wanderingwires-k8s-local","Port":3990,"RequestPath":"/healthz"},"Pool":{"Name":"api-wanderingwires-k8s-local","HealthCheck":{"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","SelfLink":"https://www.googleapis.com/compute/v1/projects/wanderingwires/global/httpHealthChecks/api-wanderingwires-k8s-local","Port":3990,"RequestPath":"/healthz"},"Lifecycle":"Sync"}}
I0128 07:45:25.317342 54184 executor.go:192] Executing task "ManagedFile/nodeupconfig-nodes-us-west2-a": *fitasks.ManagedFile {"Name":"nodeupconfig-nodes-us-west2-a","Lifecycle":"Sync","Base":null,"Location":"igconfig/node/nodes-us-west2-a/nodeupconfig.yaml","Contents":{"resource":"Assets:\n amd64:\n - bf37335da58182783a8c63866ec1f895b4c436e3ed96bdd87fe3f8ae8004ba1d@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubelet,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubelet\n - 2a44c0841b794d85b7819b505da2ff3acd5950bd1bcd956863714acc80653574@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubectl,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubectl\n - 777e79ed325582b7260ff4775d1c7759cc26a4adeea6766ff2826e1409361b9e@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/mounter,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/mounter\n - f3a841324845ca6bf0d4091b4fc7f97e18a623172158b72fc3fdcdb9d42d2d37@https://storage.googleapis.com/k8s-artifacts-cni/release/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz\n - 371de359d6102c51f6ee2361d08297948d134ce7379e01cb965ceeffa4365fba@https://github.com/containerd/containerd/releases/download/v1.7.7/containerd-1.7.7-linux-amd64.tar.gz\n - b9bfdd4cb27cddbb6172a442df165a80bfc0538a676fbca1a6a6c8f4c6933b43@https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64\n - fe602e26bb0f0b10239bc57c106bc792de0433c4008549f1cbd29c3dc82adbf8@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/amd64/protokube,https://github.com/kubernetes/kops/releases/download/v1.28.3/protokube-linux-amd64\n - b2822daa9b6ab1ae17cbbe47d2a5f9b60a22dede106e28c6e00b21d078aaf918@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/amd64/channels,https://github.com/kubernetes/kops/releases/download/v1.28.3/channels-linux-amd64\n arm64:\n - 28ddb696eb6e076f2a2f59ccaa2e409785a63346e5bda819717c6e0f58297702@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubelet,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubelet\n - 
f87fe017ae3ccfd93df03bf17edd4089672528107f230563b8c9966909661ef2@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubectl,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubectl\n - 4a5474cdfc08b1f8796c29d63ae90c125b880e0f5c0e99175ad50ed9b56745a4@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/mounter,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/mounter\n - 525e2b62ba92a1b6f3dc9612449a84aa61652e680f7ebf4eff579795fe464b57@https://storage.googleapis.com/k8s-artifacts-cni/release/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz\n - 0a104f487193665d2681fcb5ed83f2baa5f97849fe2661188da835c9d4eaf9e3@https://github.com/containerd/containerd/releases/download/v1.7.7/containerd-1.7.7-linux-arm64.tar.gz\n - b43e9f561e85906f469eef5a7b7992fc586f750f44a0e011da4467e7008c33a0@https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64\n - 4b260a2648ca92ff504210928bceaf5e78c97ac061c4b878252dd5742ebe0564@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/arm64/protokube,https://github.com/kubernetes/kops/releases/download/v1.28.3/protokube-linux-arm64\n - a57861ca48667c43b90eef3a7df744421db4cf9655257beb7606c52695b8fd22@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/arm64/channels,https://github.com/kubernetes/kops/releases/download/v1.28.3/channels-linux-arm64\nCAs: {}\nClusterName: wanderingwires.k8s.local\nContainerRuntime: containerd\nHooks:\n- null\n- null\nKeypairIDs:\n kubernetes-ca: \"7329173026093425248667361338\"\nKubeProxy: null\nKubeletConfig:\n anonymousAuth: false\n cgroupDriver: systemd\n cgroupRoot: /\n cloudProvider: external\n clusterDNS: 100.64.0.10\n clusterDomain: cluster.local\n enableDebuggingHandlers: true\n evictionHard: memory.available\u003c100Mi,nodefs.available\u003c10%,nodefs.inodesFree\u003c5%,imagefs.available\u003c10%,imagefs.inodesFree\u003c5%\n hairpinMode: promiscuous-bridge\n kubeconfigPath: /var/lib/kubelet/kubeconfig\n logLevel: 2\n nodeLabels:\n cloud.google.com/metadata-proxy-ready: \"true\"\n 
node-role.kubernetes.io/node: \"\"\n podInfraContainerImage: registry.k8s.io/pause:3.9@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\n podManifestPath: /etc/kubernetes/manifests\n protectKernelDefaults: true\n registerSchedulable: true\n shutdownGracePeriod: 30s\n shutdownGracePeriodCriticalPods: 10s\nKubernetesVersion: 1.28.5\nNetworking:\n cilium: {}\n nonMasqueradeCIDR: 100.64.0.0/10\n serviceClusterIPRange: 100.64.0.0/13\nUpdatePolicy: automatic\nchannels:\n- gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/bootstrap-channel.yaml\ncontainerdConfig:\n logLevel: info\n runc:\n version: 1.1.9\n version: 1.7.7\ndocker:\n skipInstall: true\nmultizone: true\nnodeTags: wanderingwires-k8s-local-k8s-io-role-node\nusesLegacyGossip: true\nusesNoneDNS: false\n","task":{"Name":"nodes-us-west2-a","Lifecycle":"Sync"}},"PublicACL":null}
I0128 07:45:25.317543 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/igconfig/node/nodes-us-west2-a/nodeupconfig.yaml"
I0128 07:45:25.317292 54184 executor.go:192] Executing task "ForwardingRule/api-wanderingwires-k8s-local": *gcetasks.ForwardingRule {"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","PortRange":"443-443","Ports":null,"TargetPool":{"Name":"api-wanderingwires-k8s-local","HealthCheck":{"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","SelfLink":"https://www.googleapis.com/compute/v1/projects/wanderingwires/global/httpHealthChecks/api-wanderingwires-k8s-local","Port":3990,"RequestPath":"/healthz"},"Lifecycle":"Sync"},"IPAddress":{"Name":"api-wanderingwires-k8s-local","Lifecycle":"Sync","IPAddress":"34.94.37.195","IPAddressType":null,"Purpose":null,"ForAPIServer":true,"Subnetwork":null},"RuleIPAddress":null,"IPProtocol":"TCP","LoadBalancingScheme":"EXTERNAL","Network":null,"Subnetwork":null,"BackendService":null,"Labels":{"k8s-io-cluster-name":"wanderingwires-k8s-local","name":"api"}}
I0128 07:45:25.317284 54184 executor.go:192] Executing task "ManagedFile/nodeupconfig-control-plane-us-west2-a": *fitasks.ManagedFile {"Name":"nodeupconfig-control-plane-us-west2-a","Lifecycle":"Sync","Base":null,"Location":"igconfig/control-plane/control-plane-us-west2-a/nodeupconfig.yaml","Contents":{"resource":"APIServerConfig:\n API: {}\n ClusterDNSDomain: cluster.local\n KubeAPIServer:\n allowPrivileged: true\n anonymousAuth: false\n apiAudiences:\n - kubernetes.svc.default\n apiServerCount: 1\n authorizationMode: Node,RBAC\n bindAddress: 0.0.0.0\n cloudProvider: external\n enableAdmissionPlugins:\n - NamespaceLifecycle\n - LimitRanger\n - ServiceAccount\n - DefaultStorageClass\n - DefaultTolerationSeconds\n - MutatingAdmissionWebhook\n - ValidatingAdmissionWebhook\n - NodeRestriction\n - ResourceQuota\n etcdServers:\n - https://127.0.0.1:4001\n etcdServersOverrides:\n - /events#https://127.0.0.1:4002\n image: registry.k8s.io/kube-apiserver:v1.28.5@sha256:4bb6f46baa98052399ee2270d5912edb97d4f8602ea2e2700f0527a887228112\n kubeletPreferredAddressTypes:\n - InternalIP\n - Hostname\n - ExternalIP\n logLevel: 2\n requestheaderAllowedNames:\n - aggregator\n requestheaderExtraHeaderPrefixes:\n - X-Remote-Extra-\n requestheaderGroupHeaders:\n - X-Remote-Group\n requestheaderUsernameHeaders:\n - X-Remote-User\n securePort: 443\n serviceAccountIssuer: https://api.internal.wanderingwires.k8s.local\n serviceAccountJWKSURI: https://api.internal.wanderingwires.k8s.local/openid/v1/jwks\n serviceClusterIPRange: 100.64.0.0/13\n storageBackend: etcd3\n ServiceAccountPublicKeys: |\n -----BEGIN RSA PUBLIC KEY-----\n MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvaabiJlPUbcQaafO36Rm\n 6s2g3pDORVPrNWOLbAogE5u1cpTtAr0nDmqjBjiKxRCa91AqpcDnpHZJ4Dv8/eIP\n c5+lTXNaYTzSbmUC48Lsv03Og0Yh09fmpbU4YxP7FOJf8rMrhWzPn5bPO75eZpEe\n bVU9LcHmfR4OkUExlaTpFFRLPwNRENSfuTnQkMWAy+835QZ4YrHEhzTSXrsPryPz\n 8bhQo4vScFv0qcaJNCky1C3rdi/NUmhXfYDpdA78WAwI66d5ZE+QSK/6+1XNNuiY\n 
M4MUi+iLppJJ8YmEpsONswpc4/KmFhD+Q2RSjwfoCah3x5a6teLozBPg3oO1BVZb\n cQIDAQAB\n -----END RSA PUBLIC KEY-----\nApiserverAdditionalIPs:\n- 34.94.37.195\nAssets:\n amd64:\n - bf37335da58182783a8c63866ec1f895b4c436e3ed96bdd87fe3f8ae8004ba1d@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubelet,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubelet\n - 2a44c0841b794d85b7819b505da2ff3acd5950bd1bcd956863714acc80653574@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubectl,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/kubectl\n - 777e79ed325582b7260ff4775d1c7759cc26a4adeea6766ff2826e1409361b9e@https://dl.k8s.io/release/v1.28.5/bin/linux/amd64/mounter,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/amd64/mounter\n - f3a841324845ca6bf0d4091b4fc7f97e18a623172158b72fc3fdcdb9d42d2d37@https://storage.googleapis.com/k8s-artifacts-cni/release/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz\n - 371de359d6102c51f6ee2361d08297948d134ce7379e01cb965ceeffa4365fba@https://github.com/containerd/containerd/releases/download/v1.7.7/containerd-1.7.7-linux-amd64.tar.gz\n - b9bfdd4cb27cddbb6172a442df165a80bfc0538a676fbca1a6a6c8f4c6933b43@https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64\n - fe602e26bb0f0b10239bc57c106bc792de0433c4008549f1cbd29c3dc82adbf8@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/amd64/protokube,https://github.com/kubernetes/kops/releases/download/v1.28.3/protokube-linux-amd64\n - b2822daa9b6ab1ae17cbbe47d2a5f9b60a22dede106e28c6e00b21d078aaf918@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/amd64/channels,https://github.com/kubernetes/kops/releases/download/v1.28.3/channels-linux-amd64\n arm64:\n - 28ddb696eb6e076f2a2f59ccaa2e409785a63346e5bda819717c6e0f58297702@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubelet,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubelet\n - 
f87fe017ae3ccfd93df03bf17edd4089672528107f230563b8c9966909661ef2@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubectl,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/kubectl\n - 4a5474cdfc08b1f8796c29d63ae90c125b880e0f5c0e99175ad50ed9b56745a4@https://dl.k8s.io/release/v1.28.5/bin/linux/arm64/mounter,https://cdn.dl.k8s.io/release/v1.28.5/bin/linux/arm64/mounter\n - 525e2b62ba92a1b6f3dc9612449a84aa61652e680f7ebf4eff579795fe464b57@https://storage.googleapis.com/k8s-artifacts-cni/release/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz\n - 0a104f487193665d2681fcb5ed83f2baa5f97849fe2661188da835c9d4eaf9e3@https://github.com/containerd/containerd/releases/download/v1.7.7/containerd-1.7.7-linux-arm64.tar.gz\n - b43e9f561e85906f469eef5a7b7992fc586f750f44a0e011da4467e7008c33a0@https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64\n - 4b260a2648ca92ff504210928bceaf5e78c97ac061c4b878252dd5742ebe0564@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/arm64/protokube,https://github.com/kubernetes/kops/releases/download/v1.28.3/protokube-linux-arm64\n - a57861ca48667c43b90eef3a7df744421db4cf9655257beb7606c52695b8fd22@https://artifacts.k8s.io/binaries/kops/1.28.3/linux/arm64/channels,https://github.com/kubernetes/kops/releases/download/v1.28.3/channels-linux-arm64\nCAs:\n apiserver-aggregator-ca: |\n -----BEGIN CERTIFICATE-----\n MIIDDDCCAfSgAwIBAgIMF66MwcTRHZXSprA8MA0GCSqGSIb3DQEBCwUAMCIxIDAe\n BgNVBAMTF2FwaXNlcnZlci1hZ2dyZWdhdG9yLWNhMB4XDTI0MDEyNjE1MzQzMloX\n DTM0MDEyNTE1MzQzMlowIjEgMB4GA1UEAxMXYXBpc2VydmVyLWFnZ3JlZ2F0b3It\n Y2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYdCubCnch+Dt71MHO\n MQnz2Ua9TG2FpaqrLgmyWXqxR1RE3xNlytBlhGhT/1XRM3D7gnSI6pxtxooN7nRK\n /OxmZriy1JmFExkmzxKS8Z14QcBGsNQ6ylTNS8v+WFARF6nrn9nsPBDW7Ek9Y12/\n cw1q8ybMDzuiprbXSLdN79j+GPhBz+U02ml3ub+f9ABw2f6f22b5hA4odDnHYG/0\n CJv6hFVJqJO/Q/OTTF1xjVqqWZdCjFrr7zl2BtMfhIJSFJPU3QCfpz9Nczm/kFNO\n nXdLSccv1sb4j6AuiPKHaSGRbbw3uiBfHA9Ls/l5eogvho8v/lJch+/JL6ifTomh\n 
Ij5FAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0G\n A1UdDgQWBBR0Y+0olqEpzgedQeRHD2oDLevPXDANBgkqhkiG9w0BAQsFAAOCAQEA\n hHv9SVcLBCga5AXgKK+iZPAz0yMay9KqrCnFWK2ZuiC3Ulh8khmsxosbvSRQF3t1\n EO9nmWRnRGcyaUeIF2BcS0os1ys5ht6PfaLGfbw/Pabqrm7sllNbeJHOXRp42PYf\n VNf34hPauf1/dBnXESC4Fg4jFJ7DVOqvc2Ur2+6sqXsfxrhJTVz/ouokNGefjvVS\n ob76Grvn+hHjjYKhDuYALhP5lXrKIRxycodNqz0Xo7dNwDEAPZ8WeJVhm3GIcjaC\n uGHCZcBWaogVkZw3+SHBl5YHA4UUjyHNvV0nFtLdHXw45ZSlZulJ6wlXFBVAM1Nx\n hwP9EQ6kea2ID70RcSV4og==\n -----END CERTIFICATE-----\n etcd-clients-ca: |\n -----BEGIN CERTIFICATE-----\n MIIC/DCCAeSgAwIBAgIMF66MwcTMOaAXjflnMA0GCSqGSIb3DQEBCwUAMBoxGDAW\n BgNVBAMTD2V0Y2QtY2xpZW50cy1jYTAeFw0yNDAxMjYxNTM0MzJaFw0zNDAxMjUx\n NTM0MzJaMBoxGDAWBgNVBAMTD2V0Y2QtY2xpZW50cy1jYTCCASIwDQYJKoZIhvcN\n AQEBBQADggEPADCCAQoCggEBAMBJWu+f9fvOF7c/mQHOs/Uj6uib1Hg+4lgAOxQ2\n oy2jjm+F15BFGNAY9hLk2VkprKFhPKe4ZN7pA71MXHAcKrZrcY60e7HvAUFnU1kS\n CBGvjZGo4r6hAWtixjTgvn0JufMVukoRsAcODTN6Pn+ITpacFtU9MgasYROu1J9y\n XAMuXO57Z8/ehtHbqBZ8QFgLuc3bi5N6Lccfxep4EApUGg8YH+rV3gPfpwjNoYF3\n UCEoM9qbmRR0dAYyDUqq6gHKndtGWrKVtw45MSQBLK7Y3SYnCZwBxTuP55W/lFpI\n 06HZp/dx1o4mcXy7VsTtfFhWB6MmzFcWXGh7d0WFZ5dzEf8CAwEAAaNCMEAwDgYD\n VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFPdGLhqRI272\n Ea7Wt79Ze5K29UD2MA0GCSqGSIb3DQEBCwUAA4IBAQB+ytga+XYG81C4Eqf7Gb++\n PUVwjCU0iyQqXiEWx8mHecfeR0TaNGGBXekt2Im+7wSiA+f4dB7yIV0fbWLgK+Ks\n WuLdiYYcqys2nwF6By/FDzJge+AOEVJG8w0EYLe16a0YgaajOZtE1juG94Nv8x/j\n 0eY5Njc2gpjS5EXPGJcLNBl1Z6N9HwHvqGtM1b0FfHGRAR95OhJm5+X1f9azArlG\n UBTd6ca+yq2+SFE921EXV+bVrK64ZrXxB1HaYyli83AWxflhJo2lu+QGyHVa4lDo\n Lsa6qpE/H8ehYnP614S82EjzPkm+A5bN15i7x2v+/IMLK9szUbyREK3syK0NcynR\n -----END CERTIFICATE-----\n etcd-manager-ca-events: |\n -----BEGIN CERTIFICATE-----\n MIIDCjCCAfKgAwIBAgIMF66MwcTUUwny/I66MA0GCSqGSIb3DQEBCwUAMCExHzAd\n BgNVBAMTFmV0Y2QtbWFuYWdlci1jYS1ldmVudHMwHhcNMjQwMTI2MTUzNDMyWhcN\n MzQwMTI1MTUzNDMyWjAhMR8wHQYDVQQDExZldGNkLW1hbmFnZXItY2EtZXZlbnRz\n 
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5moBs2bt+Jk4R4y70qvF\n w0BSgb8VT98VgamFmunihDwnDWpOmHVI0ebBW9YaJ6rjOYlozWPtlu/v0guyUWi2\n vL7LpsYdSC184IZJ5Lab9xnrGjFWB4r+UCKl7B6xtyzQaKL290oOZrrY/3qBnJ2W\n 6Gy+/KUjPTtxKyHxXsp0bDfnImHCu7ZL93jd2GgSXk8QJA8xYB3C6kdaBwWA2v0B\n 8K0M3lREYUce96U1U6JFbGxXgiFZ//SJn48f/KPTwAoY5jJF1aLrozoNHOhZWdcf\n LzhI3KeAep/lQNYurWtmixNcvP73fg3yoVTiLdHNWRobZ3jNLczCN/nneOSyZBj1\n 1QIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV\n HQ4EFgQURcNcyBG3+0cCKIGnKlzeCFLjTuYwDQYJKoZIhvcNAQELBQADggEBAETn\n qK0UB7x29cCBERWQ60I0wU1myR+Q1bnTuqoRw7mn8qmOQCcs3k/og5VJlkj48K0S\n CrTS3a74ZjAadjYDZxvP/F15yNcQa/Ti9o0HeCtzW9kgEr3BkwHnWfh0SSl0qtZ+\n z5JIKo4vSt5ucsmSDwVrLFgTy/6Tt/jHzDz+uzvRQf3vaXjw4zc6bBVfI4K73eUE\n ZS+ybJB+lfPwiao/w2DhhAfgJYykf2JE2QjnouiwL5l13ppC8BIKtDKz8RVZupgD\n 7qEST39/SQXPXS1w04c5oDNlnG51zfCnhSJOf42ukignBmL2ZprkbbhEtgzcTJn3\n N+ma4POS5bJTpOWyZrw=\n -----END CERTIFICATE-----\n etcd-manager-ca-main: |\n -----BEGIN CERTIFICATE-----\n MIIDBjCCAe6gAwIBAgIMF66MwcTO16q0cerhMA0GCSqGSIb3DQEBCwUAMB8xHTAb\n BgNVBAMTFGV0Y2QtbWFuYWdlci1jYS1tYWluMB4XDTI0MDEyNjE1MzQzMloXDTM0\n MDEyNTE1MzQzMlowHzEdMBsGA1UEAxMUZXRjZC1tYW5hZ2VyLWNhLW1haW4wggEi\n MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDnKqOa312Tx39pLlzMUjfBp3oW\n cFAJeGAvO70vfB1LTj8GSLfw/d/WIil2FJFqV5aYbD7bHEgA3UhN72BB5PwFYUn5\n T5sgWEhlGYKFGk/CD9gRmDUPIp3Mi+Gjle0bnqxeYNNdiIKpdkK5onZSeHcv9nDU\n YVTi2zte+H6LiAV59Psfyox8+WzKTJEx/y2fdlgb0hvT63gNF1BZtHDXHtXeBj6w\n aLJBfn29/NVhgdqK01tsCaN3NuSablxcKWQw+fZJi7PYW4U4pFsyBhGHV6JP/kcj\n 60Xa0FlOSjBJ3bas/vafH79NSjZ5mO06RdvtVucr0UdslIaGsDjl5+VcTS+pAgMB\n AAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\n BBSWPls3Vl7qiI0OEZKs04rc50F7cjANBgkqhkiG9w0BAQsFAAOCAQEA4QtlD5GP\n E37ApdLl6JA4rP3sgFZo9L+6d3ZRlC0jOYSvIaKfzSYX322moc4OViSrUJh5FCPK\n ysQruoSkWIYc6v0j7mye787oV3vIH2s8zG+k8yRClGlFwT6ZpyRHtufUmF8Emk0S\n V9lROz0igbr2UMA+l7XS1KFoQ9CgyClsD2WBVEGhiG4LU34lfFl6tS2js6MJEziS\n yxeVoIc5BHdpKGnjytFwpsmPQs7quVcnEB43zx6N+7d1zm23uLi+AGo/DxpF8XbH\n 
kDSeJQrKD4x2gyeFbTwBHbE/YZACXL8snT356pVyYS4PvOuvp3xrgvf3u4nkuxy2\n jHgDxxriS0BTkw==\n -----END CERTIFICATE-----\n etcd-peers-ca-events: |\n -----BEGIN CERTIFICATE-----\n MIIDBjCCAe6gAwIBAgIMF66MwcTNYv9nZsGIMA0GCSqGSIb3DQEBCwUAMB8xHTAb\n BgNVBAMTFGV0Y2QtcGVlcnMtY2EtZXZlbnRzMB4XDTI0MDEyNjE1MzQzMloXDTM0\n MDEyNTE1MzQzMlowHzEdMBsGA1UEAxMUZXRjZC1wZWVycy1jYS1ldmVudHMwggEi\n MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9jt6A53aqRVk5Jthgh5ur/Brm\n OPTS3wOdqw8Cf1F37E5ndp7Sja8dBSvZEzEgrdDdNwDFJVQEX812EUSit6c/+i7b\n vERkcdTZ9y5cpN2RbxXp3W+N85OsmbE5MJxi5pF1irmi7lMSKJMoVka2Tf5PevV2\n 761vTnNuq2teESBYsEGfSRoO1l64fXlnEuEgpAXwoL+dnBZajDWGXSAiIZU424uR\n GzUmsSO71101MLuuSDGlrpsWv11XAFsGU35ze8uQo0CTDF3vbqUJKXmM8tKPrnJH\n +Y47SZpwniEBOt7F78Eby1N2k6PgXC55J2xXIUValwyY1nXnG5AZqnQZrmMJAgMB\n AAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\n BBRbAQJ0zEh4MxSrSpPyoe5qKHw4LDANBgkqhkiG9w0BAQsFAAOCAQEAQ2EWBTJq\n 07hbtMf2bgfdL1lgoXinqCqesDYA7Y3Ok6wfbFnPtKacoDJgHKFCKvX+TQmAsEIs\n S6E/sl8DF1AI8kadmObZcrnOVrdP0HXojI/T0Nh/k/mvAKkdhcYsJkHBa4/g9cVv\n Ix6COQiGDp4oB+ACO77JDm0YHAwrhe3GxahPiKL3lUmmDbQotdB3AU1jA763WFgW\n 82H21k2FRNTAxAFAWzhtSjztCJyU5YGuNPFaREnovI1N1gtOZKP6YrtI3+Uob4Gu\n VVM2rjUKoT4yDdT+PrACRvWGi9qdwdtCYubvNqDw5BopKdw/eF1NyhjzDr6LpZ/B\n fxA0wgueM7IFwg==\n -----END CERTIFICATE-----\n etcd-peers-ca-main: |\n -----BEGIN CERTIFICATE-----\n MIIDAjCCAeqgAwIBAgIMF66MwcTLkmOsJJDWMA0GCSqGSIb3DQEBCwUAMB0xGzAZ\n BgNVBAMTEmV0Y2QtcGVlcnMtY2EtbWFpbjAeFw0yNDAxMjYxNTM0MzJaFw0zNDAx\n MjUxNTM0MzJaMB0xGzAZBgNVBAMTEmV0Y2QtcGVlcnMtY2EtbWFpbjCCASIwDQYJ\n KoZIhvcNAQEBBQADggEPADCCAQoCggEBAMqFSqO3H41E1I7HM2hn4VvXv6Ogq8aT\n 6HOtTuFDYjsqRFc1E4pIFfTKAmmsYHgdMPTjSF9HUd18hA+hR5j0vvCFhDyedVev\n lFuHrW+dYIaoQ3Si57oitTJ8k9BuxE9QFZJhnq3MX7WAI7F4MezrGslfJzfXaLoW\n bkIlLq4AQKEzlhb9OzS4YTsUyzhOOg0wZNcJ4N5o2p+cNvwaOPqmRexnz9GJ5zVr\n Cw1DgJVbVubS9nUXsT4vl4eTd7QyCPEL48sPkGmOdhNTlLKJjIUpjQHvlg4ieg5X\n i5FHALqMG9QofzOzcFraESyF4vJOi2vvRcvOhG3zoGhAnT+Vv/k6G+cCAwEAAaNC\n 
MEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNaH\n 78VD7C39t+UZzWOL1bQj7Hp7MA0GCSqGSIb3DQEBCwUAA4IBAQCjW6IP5cjZQms6\n kBza8lj5a2BFzuXWrfniv/pcYQGTGJIe9xfx5w2xvI3NvSx2iNDFphemRhPzKurF\n EUAmc0FsNVsWChYS/HFSXs+vC6trKo788fJiWJV+kRVfD5ElWPIlpH/Bbbt09hXD\n +UpaMwc8464su+nsjWeLQiO7bwgUovrfvp4l56As4LOjK/Sw+eqVi2Q/oFT9WNYH\n Q6wiVGvlkVWvcU0F8UsDbBPc2W5cB9nRb/8uue9u6sPi2MyyHyjBOlf7ehjpLhMn\n TY4iAFaIT8wzsVItI+Hfz8yAX/FIdbnUA4EBQgQEFZEFtyWNHVBGkrlCC6hN1P/X\n vbxTiPvQ\n -----END CERTIFICATE-----\n kubernetes-ca: |\n -----BEGIN CERTIFICATE-----\n MIIC+DCCAeCgAwIBAgIMF66Mwd7hxcl7JuA6MA0GCSqGSIb3DQEBCwUAMBgxFjAU\n BgNVBAMTDWt1YmVybmV0ZXMtY2EwHhcNMjQwMTI2MTUzNDMzWhcNMzQwMTI1MTUz\n NDMzWjAYMRYwFAYDVQQDEw1rdWJlcm5ldGVzLWNhMIIBIjANBgkqhkiG9w0BAQEF\n AAOCAQ8AMIIBCgKCAQEAzB/1nxQpMl/yGgmI/NoCE+cPcG9+TCm/3e3Fr+Ev+oZd\n gH4Zmc1PV3oQmE78+plfkXKinItkOfnHxA5I34jffd6FErv9zqDPTsfWwMYQPFw3\n 1197ZOZPHl7Gk6XdEWmpAGjhydLNqYOAsynU9FBNl0HJ6OKfTSsp9lZ8UQLwZbrm\n Ew4D3VruLgGa0P+8ls6Z5qt0+BfNw3tslpnDEJwbEkgqgABR29kxCDuzne5oey1c\n IJEFSMp/kj7IpKfdvvhyJBuH0FfddHUfMgmZl3SfudHVlhIwLRdlWdoqEfA4sV7L\n vNvNYlNeYVbdl8FwNOlk6NXoHPb2C6jnC2osrd93wwIDAQABo0IwQDAOBgNVHQ8B\n Af8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUrvf8N8a1+0e4gLtn\n w2JH+qo3WcgwDQYJKoZIhvcNAQELBQADggEBADJjeyJbRKywUBDbT/iH8nR4Gbp8\n 6E+Y7tdylhtKCasL64smyYXT01gLzLf/zDnDU0lgk4niILzmBHdJg/SnqYBm4WVA\n N4abMY9TooyY8WmB0Am818uftqsTYquAZi1WzD/5qLJQFVBgjy/JbRYnsKnwEy4+\n S3EsYtz4HSRYZFn+KT9hjlokFptbF7v5MLKblGpLBTMdleKGSbi/A7n7rBkOMgWt\n KLNPG0a3lu+bqsq9gM05ghWAUWsgQn4MgzjLU2pGTWjsoWlMjOFoXKtLo294lNUm\n Qfu6VJsjGRTWA51IihIpzv6bxqu97SX2BumWKfPJfiT5nbXgvLNWRMlexwo=\n -----END CERTIFICATE-----\nClusterName: wanderingwires.k8s.local\nContainerRuntime: containerd\nControlPlaneConfig:\n KubeControllerManager:\n allocateNodeCIDRs: true\n attachDetachReconcileSyncPeriod: 1m0s\n cloudProvider: external\n clusterCIDR: 100.96.0.0/11\n clusterName: wanderingwires.k8s.local\n configureCloudRoutes: false\n image: 
registry.k8s.io/kube-controller-manager:v1.28.5@sha256:6e8c9171f74a4e3fadedce8f865f58092a597650a709702f2b122f6ca3b6cd32\n leaderElection:\n leaderElect: true\n logLevel: 2\n useServiceAccountCredentials: true\n KubeScheduler:\n image: registry.k8s.io/kube-scheduler:v1.28.5@sha256:9a48e33e454c904cf53c144a8d0ae3f5a4c2c5bd086b8953ad7d5a637b9aa007\n leaderElection:\n leaderElect: true\n logLevel: 2\nEtcdClusterNames:\n- main\n- events\nFileAssets:\n- content: |\n apiVersion: kubescheduler.config.k8s.io/v1\n clientConnection:\n kubeconfig: /var/lib/kube-scheduler/kubeconfig\n kind: KubeSchedulerConfiguration\n path: /var/lib/kube-scheduler/config.yaml\nHooks:\n- null\n- null\nKeypairIDs:\n apiserver-aggregator-ca: \"7329173024215239264230092860\"\n etcd-clients-ca: \"7329173024213862719777798503\"\n etcd-manager-ca-events: \"7329173024216142462035201722\"\n etcd-manager-ca-main: \"7329173024214599438150265569\"\n etcd-peers-ca-events: \"7329173024214189684092748168\"\n etcd-peers-ca-main: \"7329173024213678841835851990\"\n kubernetes-ca: \"7329173026093425248667361338\"\n service-account: \"7329173026094200060060023913\"\nKubeProxy: null\nKubeletConfig:\n anonymousAuth: false\n cgroupDriver: systemd\n cgroupRoot: /\n cloudProvider: external\n clusterDNS: 100.64.0.10\n clusterDomain: cluster.local\n enableDebuggingHandlers: true\n evictionHard: memory.available\u003c100Mi,nodefs.available\u003c10%,nodefs.inodesFree\u003c5%,imagefs.available\u003c10%,imagefs.inodesFree\u003c5%\n hairpinMode: promiscuous-bridge\n kubeconfigPath: /var/lib/kubelet/kubeconfig\n logLevel: 2\n nodeLabels:\n kops.k8s.io/kops-controller-pki: \"\"\n node-role.kubernetes.io/control-plane: \"\"\n node.kubernetes.io/exclude-from-external-load-balancers: \"\"\n podInfraContainerImage: registry.k8s.io/pause:3.9@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\n podManifestPath: /etc/kubernetes/manifests\n protectKernelDefaults: true\n registerSchedulable: true\n 
shutdownGracePeriod: 30s\n shutdownGracePeriodCriticalPods: 10s\n taints:\n - node-role.kubernetes.io/control-plane=:NoSchedule\nKubernetesVersion: 1.28.5\nNetworking:\n cilium: {}\n nonMasqueradeCIDR: 100.64.0.0/10\n serviceClusterIPRange: 100.64.0.0/13\nUpdatePolicy: automatic\nchannels:\n- gs://wanderingwires-clusters/wanderingwires.k8s.local/addons/bootstrap-channel.yaml\nconfigStore:\n keypairs: gs://wanderingwires-clusters/wanderingwires.k8s.local/pki\n secrets: gs://wanderingwires-clusters/wanderingwires.k8s.local/secrets\ncontainerdConfig:\n logLevel: info\n runc:\n version: 1.1.9\n version: 1.7.7\ndocker:\n skipInstall: true\netcdManifests:\n- gs://wanderingwires-clusters/wanderingwires.k8s.local/manifests/etcd/main-control-plane-us-west2-a.yaml\n- gs://wanderingwires-clusters/wanderingwires.k8s.local/manifests/etcd/events-control-plane-us-west2-a.yaml\nmultizone: true\nnodeTags: wanderingwires-k8s-local-k8s-io-role-node\nstaticManifests:\n- key: kube-apiserver-healthcheck\n path: manifests/static/kube-apiserver-healthcheck.yaml\nusesLegacyGossip: true\nusesNoneDNS: false\n","task":{"Name":"control-plane-us-west2-a","Lifecycle":"Sync"}},"PublicACL":null}
I0128 07:45:25.317910 54184 gsfs.go:278] Reading file "gs://wanderingwires-clusters/wanderingwires.k8s.local/igconfig/control-plane/control-plane-us-west2-a/nodeupconfig.yaml"
I0128 07:45:25.317426 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:25.419376 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
I0128 07:45:25.538465 54184 changes.go:154] comparing maps: k8s-io-cluster-name wanderingwires-k8s-local wanderingwires-k8s-local
I0128 07:45:25.538508 54184 changes.go:154] comparing maps: name api api
W0128 07:45:25.538550 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m59s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:25.538589 54184 executor.go:111] Tasks: 67 done / 72 total; 1 can run
I0128 07:45:25.538616 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:25.635855 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
W0128 07:45:25.636007 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m59s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:25.636039 54184 executor.go:155] No progress made, sleeping before retrying 1 task(s)
I0128 07:45:35.639865 54184 executor.go:111] Tasks: 67 done / 72 total; 1 can run
I0128 07:45:35.639921 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:35.741331 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
W0128 07:45:35.741449 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m49s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:35.741469 54184 executor.go:155] No progress made, sleeping before retrying 1 task(s)
I0128 07:45:45.741874 54184 executor.go:111] Tasks: 67 done / 72 total; 1 can run
I0128 07:45:45.741949 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:45.866951 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
W0128 07:45:45.867093 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m39s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:45.867123 54184 executor.go:155] No progress made, sleeping before retrying 1 task(s)
I0128 07:45:55.867885 54184 executor.go:111] Tasks: 67 done / 72 total; 1 can run
I0128 07:45:55.867940 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:45:55.968624 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
W0128 07:45:55.968793 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m29s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:45:55.968813 54184 executor.go:155] No progress made, sleeping before retrying 1 task(s)
I0128 07:46:05.971956 54184 executor.go:111] Tasks: 67 done / 72 total; 1 can run
I0128 07:46:05.972017 54184 executor.go:192] Executing task "Subnet/us-west2-wanderingwires-k8s-local": *gcetasks.Subnet {"Name":"us-west2-wanderingwires-k8s-local","Lifecycle":"Sync","Network":{"Name":"wanderingwires-k8s-local","Project":null,"Lifecycle":"Sync","Mode":"custom","CIDR":null,"Shared":false},"Region":"us-west2","CIDR":"10.0.32.0/20","StackType":"IPV4_ONLY","Ipv6AccessType":null,"SecondaryIpRanges":{},"Shared":false}
I0128 07:46:06.065399 54184 changes.go:81] Field changed "CIDR" actual="10.0.16.0/20" expected="10.0.32.0/20"
W0128 07:46:06.065553 54184 executor.go:139] error running task "Subnet/us-west2-wanderingwires-k8s-local" (9m19s remaining to succeed): cannot apply changes to Subnet: *gcetasks.Subnet {"Name":null,"Lifecycle":"","Network":null,"Region":null,"CIDR":"10.0.32.0/20","StackType":null,"Ipv6AccessType":null,"SecondaryIpRanges":null,"Shared":null}
I0128 07:46:06.065579 54184 executor.go:155] No progress made, sleeping before retrying 1 task(s)