@mauilion
Created July 6, 2021 21:05
*
* ==> Audit <==
* |----------|-----------------------------------|---------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------|-----------------------------------|---------|---------|---------|-------------------------------|-------------------------------|
| ssh | -- cat | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:00:38 PDT | Tue, 06 Jul 2021 12:00:39 PDT |
| | /etc/cni/net.d/10-calico.conflist | | | | | |
| ssh | -- cat | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:01:57 PDT | Tue, 06 Jul 2021 12:01:57 PDT |
| | /etc/cni/net.d/10-calico.conflist | | | | | |
| ssh | -- cat | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:02:25 PDT | Tue, 06 Jul 2021 12:02:26 PDT |
| | /etc/cni/net.d/10-calico.conflist | | | | | |
| ssh | -- cat | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:09:02 PDT | Tue, 06 Jul 2021 12:09:03 PDT |
| | /etc/cni/net.d/10-calico* | | | | | |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:16:04 PDT | Tue, 06 Jul 2021 12:18:03 PDT |
| delete | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:18:18 PDT | Tue, 06 Jul 2021 12:18:21 PDT |
| start | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:18:30 PDT | Tue, 06 Jul 2021 12:20:40 PDT |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:24:56 PDT | Tue, 06 Jul 2021 12:25:18 PDT |
| delete | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:30:27 PDT | Tue, 06 Jul 2021 12:30:30 PDT |
| start | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:30:33 PDT | Tue, 06 Jul 2021 12:32:49 PDT |
| delete | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 12:59:30 PDT | Tue, 06 Jul 2021 12:59:33 PDT |
| start | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:04:56 PDT | Tue, 06 Jul 2021 13:07:07 PDT |
| ssh | -- sudo pkill -HUP kubelet | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:16:41 PDT | Tue, 06 Jul 2021 13:16:41 PDT |
| ssh | -n calium-m02 -- sudo pkill | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:16:58 PDT | Tue, 06 Jul 2021 13:16:58 PDT |
| | -HUP kubelet | | | | | |
| ssh | -n calium-m03 -- sudo pkill | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:17:02 PDT | Tue, 06 Jul 2021 13:17:02 PDT |
| | -HUP kubelet | | | | | |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:18:59 PDT | Tue, 06 Jul 2021 13:23:48 PDT |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:25:16 PDT | Tue, 06 Jul 2021 13:28:28 PDT |
| ssh | -n calium-m02 -- sudo pkill | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:28:57 PDT | Tue, 06 Jul 2021 13:28:57 PDT |
| | -HUP containerd | | | | | |
| ssh | -n calium-m03 -- sudo pkill | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:29:01 PDT | Tue, 06 Jul 2021 13:29:01 PDT |
| | -HUP containerd | | | | | |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:29:37 PDT | Tue, 06 Jul 2021 13:30:30 PDT |
| ssh | -n calium-m02 sudo systemctl | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:30:50 PDT | Tue, 06 Jul 2021 13:30:51 PDT |
| | restart containerd | | | | | |
| ssh | -n calium-m03 sudo systemctl | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:30:55 PDT | Tue, 06 Jul 2021 13:30:55 PDT |
| | restart containerd | | | | | |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:08 PDT | Tue, 06 Jul 2021 13:31:09 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:10 PDT | Tue, 06 Jul 2021 13:31:10 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:11 PDT | Tue, 06 Jul 2021 13:31:11 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:13 PDT | Tue, 06 Jul 2021 13:31:13 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:14 PDT | Tue, 06 Jul 2021 13:31:14 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:14 PDT | Tue, 06 Jul 2021 13:31:15 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:15 PDT | Tue, 06 Jul 2021 13:31:15 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:17 PDT | Tue, 06 Jul 2021 13:31:17 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:18 PDT | Tue, 06 Jul 2021 13:31:18 PDT |
| ssh | sudo systemctl restart kubelet | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:24 PDT | Tue, 06 Jul 2021 13:31:25 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:27 PDT | Tue, 06 Jul 2021 13:31:27 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:28 PDT | Tue, 06 Jul 2021 13:31:29 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:29 PDT | Tue, 06 Jul 2021 13:31:30 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:31 PDT | Tue, 06 Jul 2021 13:31:31 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:32 PDT | Tue, 06 Jul 2021 13:31:32 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:33 PDT | Tue, 06 Jul 2021 13:31:33 PDT |
| ssh | sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:31:34 PDT | Tue, 06 Jul 2021 13:31:35 PDT |
| ssh | -- sudo systemctl restart | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:32:09 PDT | Tue, 06 Jul 2021 13:32:09 PDT |
| | kubelet | | | | | |
| ssh | -- sudo crictl ps | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:32:16 PDT | Tue, 06 Jul 2021 13:32:16 PDT |
| ssh | -n calium-m02 -- sudo crictl | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:32:48 PDT | Tue, 06 Jul 2021 13:32:49 PDT |
| | ps | | | | | |
| ssh | -n calium-m03 -- sudo crictl | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:32:53 PDT | Tue, 06 Jul 2021 13:32:53 PDT |
| | ps | | | | | |
| ssh | -- ls /etc/cni/net.d/ | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:35:02 PDT | Tue, 06 Jul 2021 13:35:02 PDT |
| delete | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:37:31 PDT | Tue, 06 Jul 2021 13:37:34 PDT |
| start | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:37:37 PDT | Tue, 06 Jul 2021 13:39:50 PDT |
| ssh-host | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:45:21 PDT | Tue, 06 Jul 2021 13:45:21 PDT |
| ssh-key | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:45:25 PDT | Tue, 06 Jul 2021 13:45:25 PDT |
| config | view | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:45:54 PDT | Tue, 06 Jul 2021 13:45:54 PDT |
| profile | list | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:46:06 PDT | Tue, 06 Jul 2021 13:46:06 PDT |
| ssh | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:46:22 PDT | Tue, 06 Jul 2021 13:47:29 PDT |
| ssh | -n calium-m02 -n calium-m03 | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:48:58 PDT | Tue, 06 Jul 2021 13:48:58 PDT |
| | date | | | | | |
| ssh | -n calium-m02 -- sudo rm | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:49:19 PDT | Tue, 06 Jul 2021 13:49:19 PDT |
| | /etc/cni/net.d/10* | | | | | |
| ssh | -n calium-m02 -- sudo rm | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:49:24 PDT | Tue, 06 Jul 2021 13:49:24 PDT |
| | /etc/cni/net.d/87* | | | | | |
| ssh | -n calium-m02 -- sudo ls | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:49:34 PDT | Tue, 06 Jul 2021 13:49:35 PDT |
| | /etc/cni/net.d | | | | | |
| ssh | -n calium-m03 -- sudo rm | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:50:13 PDT | Tue, 06 Jul 2021 13:50:13 PDT |
| | /etc/cni/net.d/10* | | | | | |
| ssh | -n calium-m03 -- sudo rm | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:50:18 PDT | Tue, 06 Jul 2021 13:50:18 PDT |
| | /etc/cni/net.d/87* | | | | | |
| ssh | -n calium-m03 -- sudo ls | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 13:50:29 PDT | Tue, 06 Jul 2021 13:50:30 PDT |
| | /etc/cni/net.d/ | | | | | |
| logs | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 14:04:58 PDT | Tue, 06 Jul 2021 14:04:59 PDT |
| logs | | calium | dcooley | v1.21.0 | Tue, 06 Jul 2021 14:05:07 PDT | Tue, 06 Jul 2021 14:05:07 PDT |
|----------|-----------------------------------|---------|---------|---------|-------------------------------|-------------------------------|
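
For reference, the audit entries above correspond to minikube invocations of the following shape (profile and node names taken from the table; a sketch reconstructed from the audit, not a verbatim shell history):

# Reconstructed from the audit table: profile "calium", worker nodes
# calium-m02 and calium-m03; -p (profile) and -n (node) are standard
# minikube ssh flags.
minikube ssh -p calium -- cat /etc/cni/net.d/10-calico.conflist
minikube ssh -p calium -n calium-m02 -- sudo pkill -HUP kubelet
minikube ssh -p calium -n calium-m03 -- sudo systemctl restart containerd
minikube delete -p calium && minikube start -p calium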
*
* ==> Last Start <==
* Log file created at: 2021/07/06 13:37:37
Running on machine: lynx
Binary: Built with gc go1.16.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0706 13:37:37.033427 3065941 out.go:291] Setting OutFile to fd 1 ...
I0706 13:37:37.033541 3065941 out.go:343] isatty.IsTerminal(1) = true
I0706 13:37:37.033543 3065941 out.go:304] Setting ErrFile to fd 2...
I0706 13:37:37.033546 3065941 out.go:343] isatty.IsTerminal(2) = true
I0706 13:37:37.033609 3065941 root.go:316] Updating PATH: /home/dcooley/.minikube/bin
I0706 13:37:37.033772 3065941 out.go:298] Setting JSON to false
I0706 13:37:37.052527 3065941 start.go:111] hostinfo: {"hostname":"lynx","uptime":181051,"bootTime":1625422806,"procs":418,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.13.0-AMD-znver2","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"d7fa226b-6c15-4fad-8de8-231679e909b1"}
I0706 13:37:37.052581 3065941 start.go:121] virtualization: kvm host
I0706 13:37:37.064920 3065941 out.go:170] 😄 [calium] minikube v1.21.0 on Arch
I0706 13:37:37.066284 3065941 out.go:170] ▪ MINIKUBE_CNI=calico
I0706 13:37:37.065098 3065941 notify.go:169] Checking for updates...
I0706 13:37:37.067456 3065941 out.go:170] ▪ MINIKUBE_NODES=3
I0706 13:37:37.068733 3065941 out.go:170] ▪ MINIKUBE_PROFILE=calium
I0706 13:37:37.069822 3065941 out.go:170] ▪ MINIKUBE_NETWORK=calium
I0706 13:37:37.070532 3065941 driver.go:335] Setting default libvirt URI to qemu:///system
I0706 13:37:37.154811 3065941 out.go:170] ✨ Using the kvm2 driver based on user configuration
I0706 13:37:37.154854 3065941 start.go:279] selected driver: kvm2
I0706 13:37:37.154863 3065941 start.go:752] validating driver "kvm2" against <nil>
I0706 13:37:37.154890 3065941 start.go:763] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0706 13:37:37.154979 3065941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0706 13:37:37.155197 3065941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/dcooley/.minikube/bin:/opt/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/dcooley/go/bin:/home/dcooley/.node_modules/bin/:/home/dcooley/.krew/bin
I0706 13:37:37.182035 3065941 install.go:137] /home/dcooley/.minikube/bin/docker-machine-driver-kvm2 version is 1.21.0
I0706 13:37:37.182118 3065941 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0706 13:37:37.202472 3065941 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
I0706 13:37:37.202494 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:37:37.202502 3065941 start_flags.go:268] Found "Calico" CNI - setting NetworkPlugin=cni
I0706 13:37:37.202511 3065941 start_flags.go:273] config:
{Name:calium KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true}
I0706 13:37:37.202608 3065941 iso.go:123] acquiring lock: {Name:mk03b7a6e13b71c1e6f890c9ec2bbcddabd08367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0706 13:37:37.203852 3065941 out.go:170] 👍 Starting control plane node calium in cluster calium
I0706 13:37:37.203891 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:37:37.203934 3065941 preload.go:125] Found local preload: /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4
I0706 13:37:37.203949 3065941 cache.go:54] Caching tarball of preloaded images
I0706 13:37:37.204079 3065941 preload.go:166] Found /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0706 13:37:37.204108 3065941 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on containerd
I0706 13:37:37.204269 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:37:37.204296 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/config.json: {Name:mk838ac3ca0b84d8e9d49fcdb9a09ac0a2a059e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
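The profile config serialized above can also be inspected directly; a minimal sketch, using the config path from the WriteFile line above and the "profile list" / "config view" commands already recorded in the audit table:

# Inspect the persisted profile config for "calium".
minikube profile list
minikube config view -p calium
cat /home/dcooley/.minikube/profiles/calium/config.json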
I0706 13:37:37.204511 3065941 cache.go:202] Successfully downloaded all kic artifacts
I0706 13:37:37.204537 3065941 start.go:313] acquiring machines lock for calium: {Name:mk713ea1bc47ea454143c3d059dd7f13e11b4c0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0706 13:37:37.204593 3065941 start.go:317] acquired machines lock for "calium" in 44.133µs
I0706 13:37:37.204606 3065941 start.go:89] Provisioning new machine with config: &{Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0706 13:37:37.204677 3065941 start.go:126] createHost starting for "" (driver="kvm2")
I0706 13:37:37.205749 3065941 out.go:197] 🔥 Creating kvm2 VM (CPUs=2, Memory=2560MB, Disk=20000MB) ...
I0706 13:37:37.205957 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:37:37.206004 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:37:37.227444 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34363
I0706 13:37:37.227861 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:37:37.228477 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:37:37.228493 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:37:37.228804 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:37:37.228937 3065941 main.go:128] libmachine: (calium) Calling .GetMachineName
I0706 13:37:37.229064 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:37.229152 3065941 start.go:160] libmachine.API.Create for "calium" (driver="kvm2")
I0706 13:37:37.229166 3065941 client.go:168] LocalClient.Create starting
I0706 13:37:37.229251 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/ca.pem
I0706 13:37:37.229295 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:37:37.229307 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:37:37.229405 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/cert.pem
I0706 13:37:37.229413 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:37:37.229421 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:37:37.229453 3065941 main.go:128] libmachine: Running pre-create checks...
I0706 13:37:37.229458 3065941 main.go:128] libmachine: (calium) Calling .PreCreateCheck
I0706 13:37:37.229664 3065941 main.go:128] libmachine: (calium) Calling .GetConfigRaw
I0706 13:37:37.230067 3065941 main.go:128] libmachine: Creating machine...
I0706 13:37:37.230080 3065941 main.go:128] libmachine: (calium) Calling .Create
I0706 13:37:37.230169 3065941 main.go:128] libmachine: (calium) Creating KVM machine...
I0706 13:37:37.237602 3065941 main.go:128] libmachine: (calium) DBG | found existing default KVM network
I0706 13:37:37.238692 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.238366 3065972 network.go:215] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:39:91}}
I0706 13:37:37.239145 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.239047 3065972 network.go:263] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0000105e8] misses:0}
I0706 13:37:37.239188 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.239065 3065972 network.go:210] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0706 13:37:37.242146 3065941 main.go:128] libmachine: (calium) DBG | trying to create private KVM network calium 192.168.50.0/24...
I0706 13:37:37.310742 3065941 main.go:128] libmachine: (calium) DBG | private KVM network calium 192.168.50.0/24 created
I0706 13:37:37.310777 3065941 main.go:128] libmachine: (calium) Setting up store path in /home/dcooley/.minikube/machines/calium ...
I0706 13:37:37.310791 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.310689 3065972 common.go:101] Making disk image using store path: /home/dcooley/.minikube
I0706 13:37:37.310809 3065941 main.go:128] libmachine: (calium) Building disk image from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso
I0706 13:37:37.310822 3065941 main.go:128] libmachine: (calium) Downloading /home/dcooley/.minikube/cache/boot2docker.iso from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso...
I0706 13:37:37.800110 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.799934 3065972 common.go:108] Creating ssh key: /home/dcooley/.minikube/machines/calium/id_rsa...
I0706 13:37:37.841573 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.841386 3065972 common.go:114] Creating raw disk image: /home/dcooley/.minikube/machines/calium/calium.rawdisk...
I0706 13:37:37.841594 3065941 main.go:128] libmachine: (calium) DBG | Writing magic tar header
I0706 13:37:37.841611 3065941 main.go:128] libmachine: (calium) Setting executable bit set on /home/dcooley/.minikube/machines/calium (perms=drwx------)
I0706 13:37:37.841624 3065941 main.go:128] libmachine: (calium) DBG | Writing SSH key tar header
I0706 13:37:37.841650 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:37.841455 3065972 common.go:128] Fixing permissions on /home/dcooley/.minikube/machines/calium ...
I0706 13:37:37.841677 3065941 main.go:128] libmachine: (calium) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines/calium
I0706 13:37:37.841691 3065941 main.go:128] libmachine: (calium) Setting executable bit set on /home/dcooley/.minikube/machines (perms=drwxr-xr-x)
I0706 13:37:37.841713 3065941 main.go:128] libmachine: (calium) Setting executable bit set on /home/dcooley/.minikube (perms=drwxr-xr-x)
I0706 13:37:37.841726 3065941 main.go:128] libmachine: (calium) Setting executable bit set on /home/dcooley (perms=drwx--x--x)
I0706 13:37:37.841742 3065941 main.go:128] libmachine: (calium) Creating domain...
I0706 13:37:37.841756 3065941 main.go:128] libmachine: (calium) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines
I0706 13:37:37.841772 3065941 main.go:128] libmachine: (calium) DBG | Checking permissions on dir: /home/dcooley/.minikube
I0706 13:37:37.841787 3065941 main.go:128] libmachine: (calium) DBG | Checking permissions on dir: /home/dcooley
I0706 13:37:37.841801 3065941 main.go:128] libmachine: (calium) DBG | Checking permissions on dir: /home
I0706 13:37:37.841811 3065941 main.go:128] libmachine: (calium) DBG | Skipping /home - not owner
I0706 13:37:37.885704 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:ec:e4:ee in network default
I0706 13:37:37.886146 3065941 main.go:128] libmachine: (calium) Ensuring networks are active...
I0706 13:37:37.886178 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:37.896473 3065941 main.go:128] libmachine: (calium) Ensuring network default is active
I0706 13:37:37.896755 3065941 main.go:128] libmachine: (calium) Ensuring network calium is active
I0706 13:37:37.897359 3065941 main.go:128] libmachine: (calium) Getting domain xml...
I0706 13:37:37.908171 3065941 main.go:128] libmachine: (calium) Creating domain...
I0706 13:37:38.562239 3065941 main.go:128] libmachine: (calium) Waiting to get IP...
I0706 13:37:38.563706 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:38.564391 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:38.564421 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:38.564333 3065972 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0706 13:37:38.829824 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:38.830711 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:38.830748 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:38.830560 3065972 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0706 13:37:39.213759 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:39.214441 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:39.214472 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:39.214355 3065972 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0706 13:37:39.639423 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:39.640443 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:39.640476 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:39.640359 3065972 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0706 13:37:40.115184 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:40.115843 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:40.115875 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:40.115775 3065972 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0706 13:37:40.705576 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:40.706520 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:40.706558 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:40.706408 3065972 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0706 13:37:41.542892 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:41.543315 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:41.543332 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:41.543274 3065972 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0706 13:37:42.292403 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:42.293149 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:42.293167 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:42.293088 3065972 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0706 13:37:43.282054 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:43.282727 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:43.282751 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:43.282682 3065972 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0706 13:37:44.474000 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:44.474376 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:44.474402 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:44.474327 3065972 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0706 13:37:46.155050 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:46.156029 3065941 main.go:128] libmachine: (calium) DBG | unable to find current IP address of domain calium in network calium
I0706 13:37:46.156054 3065941 main.go:128] libmachine: (calium) DBG | I0706 13:37:46.155929 3065972 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I0706 13:37:48.505278 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.506517 3065941 main.go:128] libmachine: (calium) Found IP for machine: 192.168.50.9
I0706 13:37:48.506538 3065941 main.go:128] libmachine: (calium) Reserving static IP address...
I0706 13:37:48.506556 3065941 main.go:128] libmachine: (calium) DBG | domain calium has current primary IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.507358 3065941 main.go:128] libmachine: (calium) DBG | unable to find host DHCP lease matching {name: "calium", mac: "52:54:00:f9:db:49", ip: "192.168.50.9"} in network calium
I0706 13:37:48.675844 3065941 main.go:128] libmachine: (calium) DBG | Getting to WaitForSSH function...
I0706 13:37:48.675865 3065941 main.go:128] libmachine: (calium) Reserved static IP address: 192.168.50.9
I0706 13:37:48.675881 3065941 main.go:128] libmachine: (calium) Waiting for SSH to be available...
I0706 13:37:48.702797 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.703361 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:db:49}
I0706 13:37:48.703395 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.703499 3065941 main.go:128] libmachine: (calium) DBG | Using SSH client type: external
I0706 13:37:48.703568 3065941 main.go:128] libmachine: (calium) DBG | Using SSH private key: /home/dcooley/.minikube/machines/calium/id_rsa (-rw-------)
I0706 13:37:48.703625 3065941 main.go:128] libmachine: (calium) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.9 -o IdentitiesOnly=yes -i /home/dcooley/.minikube/machines/calium/id_rsa -p 22] /usr/bin/ssh <nil>}
I0706 13:37:48.703652 3065941 main.go:128] libmachine: (calium) DBG | About to run SSH command:
I0706 13:37:48.703665 3065941 main.go:128] libmachine: (calium) DBG | exit 0
I0706 13:37:48.866151 3065941 main.go:128] libmachine: (calium) DBG | SSH cmd err, output: <nil>:
I0706 13:37:48.866729 3065941 main.go:128] libmachine: (calium) KVM machine creation complete!
I0706 13:37:48.866858 3065941 main.go:128] libmachine: (calium) Calling .GetConfigRaw
I0706 13:37:48.867573 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:48.867955 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:48.868250 3065941 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
I0706 13:37:48.868269 3065941 main.go:128] libmachine: (calium) Calling .GetState
I0706 13:37:48.879255 3065941 main.go:128] libmachine: Detecting operating system of created instance...
I0706 13:37:48.879272 3065941 main.go:128] libmachine: Waiting for SSH to be available...
I0706 13:37:48.879282 3065941 main.go:128] libmachine: Getting to WaitForSSH function...
I0706 13:37:48.879294 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:48.899849 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.900287 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:48.900313 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:48.900612 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:48.900944 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:48.901239 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:48.901512 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:48.901771 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:37:48.902010 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.9 22 <nil> <nil>}
I0706 13:37:48.902021 3065941 main.go:128] libmachine: About to run SSH command:
exit 0
I0706 13:37:49.020934 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:37:49.020944 3065941 main.go:128] libmachine: Detecting the provisioner...
I0706 13:37:49.020948 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.036760 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.037140 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.037161 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.037542 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:49.037866 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.038134 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.038281 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:49.038434 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:37:49.038625 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.9 22 <nil> <nil>}
I0706 13:37:49.038637 3065941 main.go:128] libmachine: About to run SSH command:
cat /etc/os-release
I0706 13:37:49.172452 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2020.02.12
ID=buildroot
VERSION_ID=2020.02.12
PRETTY_NAME="Buildroot 2020.02.12"
I0706 13:37:49.172531 3065941 main.go:128] libmachine: found compatible host: buildroot
I0706 13:37:49.172541 3065941 main.go:128] libmachine: Provisioning with buildroot...
I0706 13:37:49.172552 3065941 main.go:128] libmachine: (calium) Calling .GetMachineName
I0706 13:37:49.173028 3065941 buildroot.go:166] provisioning hostname "calium"
I0706 13:37:49.173058 3065941 main.go:128] libmachine: (calium) Calling .GetMachineName
I0706 13:37:49.173407 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.196631 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.197055 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.197085 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.197403 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:49.197780 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.198061 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.198243 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:49.198444 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:37:49.198624 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.9 22 <nil> <nil>}
I0706 13:37:49.198637 3065941 main.go:128] libmachine: About to run SSH command:
sudo hostname calium && echo "calium" | sudo tee /etc/hostname
I0706 13:37:49.348239 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: calium
I0706 13:37:49.348280 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.371940 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.372514 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.372548 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.372906 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:49.373253 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.373410 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.373503 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:49.373620 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:37:49.373786 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.9 22 <nil> <nil>}
I0706 13:37:49.373808 3065941 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\scalium' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calium/g' /etc/hosts;
else
echo '127.0.1.1 calium' | sudo tee -a /etc/hosts;
fi
fi
I0706 13:37:49.517976 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:37:49.517998 3065941 buildroot.go:172] set auth options {CertDir:/home/dcooley/.minikube CaCertPath:/home/dcooley/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dcooley/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dcooley/.minikube/machines/server.pem ServerKeyPath:/home/dcooley/.minikube/machines/server-key.pem ClientKeyPath:/home/dcooley/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dcooley/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dcooley/.minikube}
I0706 13:37:49.518044 3065941 buildroot.go:174] setting up certificates
I0706 13:37:49.518055 3065941 provision.go:83] configureAuth start
I0706 13:37:49.518068 3065941 main.go:128] libmachine: (calium) Calling .GetMachineName
I0706 13:37:49.518568 3065941 main.go:128] libmachine: (calium) Calling .GetIP
I0706 13:37:49.541229 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.541645 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.541683 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.541972 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.563883 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.564423 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.564457 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.564705 3065941 provision.go:137] copyHostCerts
I0706 13:37:49.564816 3065941 exec_runner.go:145] found /home/dcooley/.minikube/ca.pem, removing ...
I0706 13:37:49.564829 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/ca.pem
I0706 13:37:49.564941 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/ca.pem --> /home/dcooley/.minikube/ca.pem (1078 bytes)
I0706 13:37:49.565194 3065941 exec_runner.go:145] found /home/dcooley/.minikube/cert.pem, removing ...
I0706 13:37:49.565209 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/cert.pem
I0706 13:37:49.565282 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/cert.pem --> /home/dcooley/.minikube/cert.pem (1123 bytes)
I0706 13:37:49.565378 3065941 exec_runner.go:145] found /home/dcooley/.minikube/key.pem, removing ...
I0706 13:37:49.565384 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/key.pem
I0706 13:37:49.565429 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/key.pem --> /home/dcooley/.minikube/key.pem (1675 bytes)
I0706 13:37:49.565507 3065941 provision.go:111] generating server cert: /home/dcooley/.minikube/machines/server.pem ca-key=/home/dcooley/.minikube/certs/ca.pem private-key=/home/dcooley/.minikube/certs/ca-key.pem org=dcooley.calium san=[192.168.50.9 192.168.50.9 localhost 127.0.0.1 minikube calium]
I0706 13:37:49.753940 3065941 provision.go:171] copyRemoteCerts
I0706 13:37:49.753969 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0706 13:37:49.753983 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.776262 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.776672 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.776703 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.776883 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:49.777137 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.777512 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:49.777684 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:37:49.875176 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0706 13:37:49.894225 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0706 13:37:49.914871 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0706 13:37:49.925374 3065941 provision.go:86] duration metric: configureAuth took 407.304016ms
I0706 13:37:49.925396 3065941 buildroot.go:189] setting minikube options for container-runtime
I0706 13:37:49.925646 3065941 main.go:128] libmachine: Checking connection to Docker...
I0706 13:37:49.925660 3065941 main.go:128] libmachine: (calium) Calling .GetURL
I0706 13:37:49.933604 3065941 main.go:128] libmachine: (calium) DBG | Using libvirt version 7003000
I0706 13:37:49.949737 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.949912 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.949938 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.950139 3065941 main.go:128] libmachine: Docker is up and running!
I0706 13:37:49.950153 3065941 main.go:128] libmachine: Reticulating splines...
I0706 13:37:49.950162 3065941 client.go:171] LocalClient.Create took 12.720988859s
I0706 13:37:49.950179 3065941 start.go:168] duration metric: libmachine.API.Create for "calium" took 12.721025258s
I0706 13:37:49.950187 3065941 start.go:267] post-start starting for "calium" (driver="kvm2")
I0706 13:37:49.950193 3065941 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0706 13:37:49.950215 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:49.950408 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0706 13:37:49.950429 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:49.967719 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.968220 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:49.968250 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:49.968401 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:49.968581 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:49.968699 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:49.968765 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:37:50.062438 3065941 ssh_runner.go:149] Run: cat /etc/os-release
I0706 13:37:50.067453 3065941 info.go:137] Remote host: Buildroot 2020.02.12
I0706 13:37:50.067476 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/addons for local assets ...
I0706 13:37:50.067559 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/files for local assets ...
I0706 13:37:50.067666 3065941 filesync.go:149] local asset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist -> 87-podman-bridge.conflist in /etc/cni/net.d
W0706 13:37:50.067681 3065941 vm_assets.go:106] NewFileAsset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist is an empty file!
I0706 13:37:50.067721 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0706 13:37:50.075244 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist --> /etc/cni/net.d/87-podman-bridge.conflist (0 bytes)
W0706 13:37:50.075265 3065941 ssh_runner.go:318] 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc00126d1d0 file:0xc00052c2b8}
W0706 13:37:50.076891 3065941 ssh_runner.go:347] asked to copy a 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc00126d1d0 file:0xc00052c2b8}
I0706 13:37:50.096503 3065941 start.go:270] post-start completed in 146.300496ms
I0706 13:37:50.096552 3065941 main.go:128] libmachine: (calium) Calling .GetConfigRaw
I0706 13:37:50.097522 3065941 main.go:128] libmachine: (calium) Calling .GetIP
I0706 13:37:50.120782 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.121352 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:50.121381 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.121732 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:37:50.122205 3065941 start.go:129] duration metric: createHost completed in 12.917512716s
I0706 13:37:50.122223 3065941 start.go:80] releasing machines lock for "calium", held for 12.917621169s
I0706 13:37:50.122280 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:50.122585 3065941 main.go:128] libmachine: (calium) Calling .GetIP
I0706 13:37:50.144431 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.144919 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:50.144951 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.145228 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:50.145463 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:50.146150 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:37:50.146441 3065941 ssh_runner.go:149] Run: systemctl --version
I0706 13:37:50.146466 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:50.146482 3065941 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0706 13:37:50.146522 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:37:50.174966 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.175554 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:50.175582 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.175846 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:50.176226 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:50.176418 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:50.176639 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:37:50.182672 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.183078 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:37:50.183107 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:37:50.183367 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:37:50.183593 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:37:50.183750 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:37:50.183880 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:37:50.440537 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:37:50.440658 3065941 ssh_runner.go:149] Run: sudo crictl images --output json
I0706 13:37:54.458233 3065941 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017546514s)
I0706 13:37:54.458382 3065941 containerd.go:573] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.7". assuming images are not preloaded.
I0706 13:37:54.458443 3065941 ssh_runner.go:149] Run: which lz4
I0706 13:37:54.460483 3065941 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0706 13:37:54.462892 3065941 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0706 13:37:54.462920 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (954876448 bytes)
I0706 13:37:55.979719 3065941 containerd.go:510] Took 1.519284 seconds to copy over tarball
I0706 13:37:55.979771 3065941 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0706 13:37:58.966285 3065941 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.986457709s)
I0706 13:37:58.966308 3065941 containerd.go:517] Took 2.986573 seconds to extract the tarball
I0706 13:37:58.966320 3065941 ssh_runner.go:100] rm: /preloaded.tar.lz4
I0706 13:37:59.011099 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:37:59.125008 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:37:59.170745 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f crio
I0706 13:37:59.187734 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0706 13:37:59.193787 3065941 docker.go:153] disabling docker service ...
I0706 13:37:59.193859 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0706 13:37:59.200265 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
E0706 13:37:59.206270 3065941 docker.go:159] "Failed to stop" err="sudo systemctl stop -f docker.service: Process exited with status 5\nstdout:\n\nstderr:\nFailed to stop docker.service: Unit docker.service not loaded.\n" service="docker.service"
W0706 13:37:59.206301 3065941 cruntime.go:236] disable failed: sudo systemctl stop -f docker.service: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service: Unit docker.service not loaded.
I0706 13:37:59.206355 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0706 13:37:59.214594 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0706 13:37:59.221771 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
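(For readability: the base64 payload above decodes to minikube's containerd config.toml. The parts that matter for this run, decoded from the blob and abbreviated:)

  root = "/var/lib/containerd"
  state = "/run/containerd"
  [plugins.cri]
    sandbox_image = "k8s.gcr.io/pause:3.2"
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = true
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"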
I0706 13:37:59.233569 3065941 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0706 13:37:59.239167 3065941 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0706 13:37:59.239279 3065941 ssh_runner.go:149] Run: sudo modprobe br_netfilter
I0706 13:37:59.249587 3065941 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
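(The three commands above are the usual bridge-netfilter prep: the sysctl read fails with status 255 until br_netfilter is loaded, which is why the modprobe follows. Run by hand on the node:)

  sudo modprobe br_netfilter                              # creates /proc/sys/net/bridge/*
  sudo sysctl net.bridge.bridge-nf-call-iptables          # resolves once the module is in
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"     # let the node forward pod traffic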
I0706 13:37:59.254159 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:37:59.330268 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:37:59.349347 3065941 start.go:381] Will wait 60s for socket path /run/containerd/containerd.sock
I0706 13:37:59.349399 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:37:59.351792 3065941 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0706 13:38:00.457128 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:38:00.463108 3065941 start.go:406] Will wait 60s for crictl version
I0706 13:38:00.463198 3065941 ssh_runner.go:149] Run: sudo crictl version
I0706 13:38:00.481466 3065941 start.go:415] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.4
RuntimeApiVersion: v1alpha2
I0706 13:38:00.481548 3065941 ssh_runner.go:149] Run: containerd --version
I0706 13:38:00.518827 3065941 out.go:170] 📦 Preparing Kubernetes v1.20.7 on containerd 1.4.4 ...
I0706 13:38:00.518908 3065941 main.go:128] libmachine: (calium) Calling .GetIP
I0706 13:38:00.544135 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:00.544739 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:38:00.544778 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:00.545215 3065941 ssh_runner.go:149] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0706 13:38:00.549509 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
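(An expanded equivalent of the one-liner above, with a fixed temp file in place of /tmp/h.$$ for clarity:)

  grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop any stale record
  echo "192.168.50.1 host.minikube.internal" >> /tmp/hosts.new       # point the name at the libvirt gateway
  sudo cp /tmp/hosts.new /etc/hosts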
I0706 13:38:00.560715 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:38:00.560773 3065941 ssh_runner.go:149] Run: sudo crictl images --output json
I0706 13:38:00.582528 3065941 containerd.go:577] all images are preloaded for containerd runtime.
I0706 13:38:00.582541 3065941 containerd.go:481] Images already preloaded, skipping extraction
I0706 13:38:00.582585 3065941 ssh_runner.go:149] Run: sudo crictl images --output json
I0706 13:38:00.603906 3065941 containerd.go:577] all images are preloaded for containerd runtime.
I0706 13:38:00.603921 3065941 cache_images.go:74] Images are preloaded, skipping loading
I0706 13:38:00.603972 3065941 ssh_runner.go:149] Run: sudo crictl info
I0706 13:38:00.626756 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:38:00.626772 3065941 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0706 13:38:00.626789 3065941 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.9 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calium NodeName:calium DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0706 13:38:00.626950 3065941 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.9
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "calium"
  kubeletExtraArgs:
    node-ip: 192.168.50.9
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
I0706 13:38:00.627062 3065941 kubeadm.go:909] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calium --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.9 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0706 13:38:00.627126 3065941 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0706 13:38:00.635690 3065941 binaries.go:44] Found k8s binaries, skipping transfer
I0706 13:38:00.635751 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0706 13:38:00.644483 3065941 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
I0706 13:38:00.657082 3065941 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0706 13:38:00.669555 3065941 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1872 bytes)
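(Three files are staged by the scp calls above; for orientation:)

  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the drop-in carrying the ExecStart shown at 13:38:00.627062
  /lib/systemd/system/kubelet.service                     # base kubelet unit
  /var/tmp/minikube/kubeadm.yaml.new                      # rendered kubeadm config, promoted to kubeadm.yaml at 13:38:01.324225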
I0706 13:38:00.683748 3065941 ssh_runner.go:149] Run: grep 192.168.50.9 control-plane.minikube.internal$ /etc/hosts
I0706 13:38:00.687237 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0706 13:38:00.694763 3065941 certs.go:52] Setting up /home/dcooley/.minikube/profiles/calium for IP: 192.168.50.9
I0706 13:38:00.694834 3065941 certs.go:179] skipping minikubeCA CA generation: /home/dcooley/.minikube/ca.key
I0706 13:38:00.694852 3065941 certs.go:179] skipping proxyClientCA CA generation: /home/dcooley/.minikube/proxy-client-ca.key
I0706 13:38:00.694917 3065941 certs.go:294] generating minikube-user signed cert: /home/dcooley/.minikube/profiles/calium/client.key
I0706 13:38:00.694940 3065941 crypto.go:69] Generating cert /home/dcooley/.minikube/profiles/calium/client.crt with IP's: []
I0706 13:38:00.974191 3065941 crypto.go:157] Writing cert to /home/dcooley/.minikube/profiles/calium/client.crt ...
I0706 13:38:00.974206 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/client.crt: {Name:mka7150ed02407df6ad80b8e7ca47530e7a7e2b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:00.974364 3065941 crypto.go:165] Writing key to /home/dcooley/.minikube/profiles/calium/client.key ...
I0706 13:38:00.974375 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/client.key: {Name:mk8140cfd55ab220cd8453d3d03159124f5cecde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:00.974435 3065941 certs.go:294] generating minikube signed cert: /home/dcooley/.minikube/profiles/calium/apiserver.key.40f175f0
I0706 13:38:00.974438 3065941 crypto.go:69] Generating cert /home/dcooley/.minikube/profiles/calium/apiserver.crt.40f175f0 with IP's: [192.168.50.9 10.96.0.1 127.0.0.1 10.0.0.1]
I0706 13:38:01.022556 3065941 crypto.go:157] Writing cert to /home/dcooley/.minikube/profiles/calium/apiserver.crt.40f175f0 ...
I0706 13:38:01.022568 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/apiserver.crt.40f175f0: {Name:mk3c2f4215a691a5d2baeefa6017a3f17b3f4058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:01.022713 3065941 crypto.go:165] Writing key to /home/dcooley/.minikube/profiles/calium/apiserver.key.40f175f0 ...
I0706 13:38:01.022723 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/apiserver.key.40f175f0: {Name:mkc4aaeb011a61ce4bb8db0d23a890a9b518589a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:01.022778 3065941 certs.go:305] copying /home/dcooley/.minikube/profiles/calium/apiserver.crt.40f175f0 -> /home/dcooley/.minikube/profiles/calium/apiserver.crt
I0706 13:38:01.022826 3065941 certs.go:309] copying /home/dcooley/.minikube/profiles/calium/apiserver.key.40f175f0 -> /home/dcooley/.minikube/profiles/calium/apiserver.key
I0706 13:38:01.022859 3065941 certs.go:294] generating aggregator signed cert: /home/dcooley/.minikube/profiles/calium/proxy-client.key
I0706 13:38:01.022862 3065941 crypto.go:69] Generating cert /home/dcooley/.minikube/profiles/calium/proxy-client.crt with IP's: []
I0706 13:38:01.072244 3065941 crypto.go:157] Writing cert to /home/dcooley/.minikube/profiles/calium/proxy-client.crt ...
I0706 13:38:01.072256 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/proxy-client.crt: {Name:mk6422f5de85ff2905e897553c66f01d0f269e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:01.072499 3065941 crypto.go:165] Writing key to /home/dcooley/.minikube/profiles/calium/proxy-client.key ...
I0706 13:38:01.072510 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.minikube/profiles/calium/proxy-client.key: {Name:mke8b7570607becf1ea716a23ca07547c8e2e38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:01.072653 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca-key.pem (1675 bytes)
I0706 13:38:01.072680 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca.pem (1078 bytes)
I0706 13:38:01.072693 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/cert.pem (1123 bytes)
I0706 13:38:01.072703 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/key.pem (1675 bytes)
I0706 13:38:01.073381 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/profiles/calium/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0706 13:38:01.096285 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/profiles/calium/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0706 13:38:01.115312 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/profiles/calium/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0706 13:38:01.135233 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/profiles/calium/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0706 13:38:01.154193 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0706 13:38:01.174000 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0706 13:38:01.193266 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0706 13:38:01.210880 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0706 13:38:01.224480 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0706 13:38:01.243680 3065941 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0706 13:38:01.256379 3065941 ssh_runner.go:149] Run: openssl version
I0706 13:38:01.263038 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0706 13:38:01.271502 3065941 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0706 13:38:01.276330 3065941 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 27 16:26 /usr/share/ca-certificates/minikubeCA.pem
I0706 13:38:01.276413 3065941 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0706 13:38:01.283626 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
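(This is OpenSSL's hashed-directory convention: certs in /etc/ssl/certs are looked up by subject hash. Reproduced by hand with the values from this log:)

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <hash>.0 is the name OpenSSL resolves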
I0706 13:38:01.292086 3065941 kubeadm.go:390] StartCluster: {Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true}
I0706 13:38:01.292264 3065941 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0706 13:38:01.292333 3065941 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0706 13:38:01.314553 3065941 cri.go:76] found id: ""
I0706 13:38:01.314643 3065941 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0706 13:38:01.324225 3065941 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0706 13:38:01.332484 3065941 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0706 13:38:01.341360 3065941 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0706 13:38:01.341398 3065941 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0706 13:38:24.936925 3065941 out.go:197] ▪ Generating certificates and keys ...
I0706 13:38:24.938782 3065941 out.go:197] ▪ Booting up control plane ...
I0706 13:38:24.940215 3065941 out.go:197] ▪ Configuring RBAC rules ...
I0706 13:38:24.941688 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:38:24.942501 3065941 out.go:170] 🔗 Configuring Calico (Container Networking Interface) ...
I0706 13:38:24.942559 3065941 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0706 13:38:24.942565 3065941 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
I0706 13:38:24.952177 3065941 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0706 13:38:26.301425 3065941 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.349223469s)
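(One way to watch Calico come up after the apply above; the k8s-app=calico-node label is the stock manifest's convention, assumed here rather than taken from this log:)

  sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l k8s-app=calico-node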
I0706 13:38:26.301452 3065941 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0706 13:38:26.301565 3065941 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0 minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11-dirty minikube.k8s.io/name=calium minikube.k8s.io/updated_at=2021_07_06T13_38_26_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0706 13:38:26.301604 3065941 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0706 13:38:26.326058 3065941 ops.go:34] apiserver oom_adj: -16
I0706 13:38:26.433384 3065941 kubeadm.go:985] duration metric: took 131.876375ms to wait for elevateKubeSystemPrivileges.
I0706 13:38:26.433585 3065941 kubeadm.go:392] StartCluster complete in 25.141507937s
I0706 13:38:26.433620 3065941 settings.go:142] acquiring lock: {Name:mkcf04f7400c8d286fb3f2fbdc94b368cc7eb601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:26.433740 3065941 settings.go:150] Updating kubeconfig: /home/dcooley/.kube/config
I0706 13:38:26.435112 3065941 lock.go:36] WriteFile acquiring /home/dcooley/.kube/config: {Name:mk1107463a04366ba250b4b0b378251c196f2c30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0706 13:38:26.960169 3065941 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calium" rescaled to 1
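(minikube rescales the coredns Deployment through the API; a kubectl equivalent of what was just logged would be:)

  sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system scale deployment coredns --replicas=1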
I0706 13:38:26.960198 3065941 start.go:214] Will wait 6m0s for node &{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0706 13:38:26.961179 3065941 out.go:170] 🔎 Verifying Kubernetes components...
I0706 13:38:26.961234 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0706 13:38:26.960269 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0706 13:38:26.960281 3065941 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0706 13:38:26.961315 3065941 addons.go:59] Setting storage-provisioner=true in profile "calium"
I0706 13:38:26.961339 3065941 addons.go:59] Setting default-storageclass=true in profile "calium"
I0706 13:38:26.961365 3065941 addons.go:135] Setting addon storage-provisioner=true in "calium"
I0706 13:38:26.961369 3065941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calium"
W0706 13:38:26.961373 3065941 addons.go:147] addon storage-provisioner should already be in state true
I0706 13:38:26.961395 3065941 host.go:66] Checking if "calium" exists ...
I0706 13:38:26.962582 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:26.962663 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:26.962872 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:26.962976 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:26.972916 3065941 api_server.go:50] waiting for apiserver process to appear ...
I0706 13:38:26.972972 3065941 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0706 13:38:26.987708 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:42423
I0706 13:38:26.988245 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:26.988802 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:26.988820 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:26.989099 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:26.989712 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:26.989763 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:26.990970 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:42013
I0706 13:38:26.991344 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:26.991931 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:26.991952 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:26.992441 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:26.992577 3065941 main.go:128] libmachine: (calium) Calling .GetState
I0706 13:38:27.002677 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:44705
I0706 13:38:27.002989 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:27.003321 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:27.003334 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:27.003686 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:27.003823 3065941 main.go:128] libmachine: (calium) Calling .GetState
I0706 13:38:27.006363 3065941 addons.go:135] Setting addon default-storageclass=true in "calium"
W0706 13:38:27.006371 3065941 addons.go:147] addon default-storageclass should already be in state true
I0706 13:38:27.006389 3065941 host.go:66] Checking if "calium" exists ...
I0706 13:38:27.006595 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:27.006623 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:27.011611 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:38:27.012822 3065941 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0706 13:38:27.012893 3065941 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0706 13:38:27.012899 3065941 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0706 13:38:27.012920 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:38:27.018350 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45749
I0706 13:38:27.018718 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:27.019113 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:27.019126 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:27.019393 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:27.019792 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:27.019819 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:27.028507 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:27.028904 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:38:27.028929 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:27.029031 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:38:27.029210 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:38:27.029302 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:38:27.029379 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:38:27.032372 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36337
I0706 13:38:27.032629 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:27.032923 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:27.032931 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:27.033237 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:27.033306 3065941 main.go:128] libmachine: (calium) Calling .GetState
I0706 13:38:27.038283 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:38:27.038446 3065941 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0706 13:38:27.038452 3065941 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0706 13:38:27.038461 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:38:27.047830 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:27.048172 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:38:27.048188 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:38:27.048373 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:38:27.048523 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:38:27.048584 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:38:27.048649 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:38:27.054648 3065941 api_server.go:70] duration metric: took 94.430274ms to wait for apiserver process to appear ...
I0706 13:38:27.054661 3065941 api_server.go:86] waiting for apiserver healthz status ...
I0706 13:38:27.054672 3065941 api_server.go:223] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
I0706 13:38:27.054772 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
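(The sed pipeline above splices this stanza into the CoreDNS Corefile, just ahead of the "forward . /etc/resolv.conf" plugin, so pods can resolve the host:)

  hosts {
     192.168.50.1 host.minikube.internal
     fallthrough
  }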
I0706 13:38:27.062923 3065941 api_server.go:249] https://192.168.50.9:8443/healthz returned 200:
ok
I0706 13:38:27.064046 3065941 api_server.go:139] control plane version: v1.20.7
I0706 13:38:27.064061 3065941 api_server.go:129] duration metric: took 9.39562ms to wait for apiserver health ...
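(The same probe by hand; with default RBAC, /healthz is readable anonymously via the system:public-info-viewer binding, and -k skips verification since the host shell doesn't trust minikubeCA:)

  curl -k https://192.168.50.9:8443/healthz   # prints: ok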
I0706 13:38:27.064066 3065941 system_pods.go:43] waiting for kube-system pods to appear ...
I0706 13:38:27.069850 3065941 system_pods.go:59] 0 kube-system pods found
I0706 13:38:27.069859 3065941 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
I0706 13:38:27.153030 3065941 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0706 13:38:27.158460 3065941 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0706 13:38:27.391693 3065941 system_pods.go:59] 0 kube-system pods found
I0706 13:38:27.391707 3065941 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
I0706 13:38:27.487932 3065941 start.go:725] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
I0706 13:38:27.602485 3065941 main.go:128] libmachine: Making call to close driver server
I0706 13:38:27.602490 3065941 main.go:128] libmachine: Making call to close driver server
I0706 13:38:27.602501 3065941 main.go:128] libmachine: (calium) Calling .Close
I0706 13:38:27.602505 3065941 main.go:128] libmachine: (calium) Calling .Close
I0706 13:38:27.602833 3065941 main.go:128] libmachine: (calium) DBG | Closing plugin on server side
I0706 13:38:27.602838 3065941 main.go:128] libmachine: Successfully made call to close driver server
I0706 13:38:27.602848 3065941 main.go:128] libmachine: (calium) DBG | Closing plugin on server side
I0706 13:38:27.602851 3065941 main.go:128] libmachine: Making call to close connection to plugin binary
I0706 13:38:27.602855 3065941 main.go:128] libmachine: Successfully made call to close driver server
I0706 13:38:27.602861 3065941 main.go:128] libmachine: Making call to close driver server
I0706 13:38:27.602869 3065941 main.go:128] libmachine: Making call to close connection to plugin binary
I0706 13:38:27.602871 3065941 main.go:128] libmachine: (calium) Calling .Close
I0706 13:38:27.602878 3065941 main.go:128] libmachine: Making call to close driver server
I0706 13:38:27.602889 3065941 main.go:128] libmachine: (calium) Calling .Close
I0706 13:38:27.603136 3065941 main.go:128] libmachine: Successfully made call to close driver server
I0706 13:38:27.603141 3065941 main.go:128] libmachine: (calium) DBG | Closing plugin on server side
I0706 13:38:27.603159 3065941 main.go:128] libmachine: Making call to close connection to plugin binary
I0706 13:38:27.603164 3065941 main.go:128] libmachine: Successfully made call to close driver server
I0706 13:38:27.603173 3065941 main.go:128] libmachine: Making call to close connection to plugin binary
I0706 13:38:27.603174 3065941 main.go:128] libmachine: Making call to close driver server
I0706 13:38:27.603172 3065941 main.go:128] libmachine: (calium) DBG | Closing plugin on server side
I0706 13:38:27.603185 3065941 main.go:128] libmachine: (calium) Calling .Close
I0706 13:38:27.603392 3065941 main.go:128] libmachine: Successfully made call to close driver server
I0706 13:38:27.603404 3065941 main.go:128] libmachine: Making call to close connection to plugin binary
I0706 13:38:27.603421 3065941 main.go:128] libmachine: (calium) DBG | Closing plugin on server side
I0706 13:38:27.604800 3065941 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0706 13:38:27.604832 3065941 addons.go:344] enableAddons completed in 644.552035ms
I0706 13:38:27.740769 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:27.740793 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:27.740802 3065941 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
I0706 13:38:28.125569 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:28.125595 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:28.125608 3065941 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
I0706 13:38:28.601963 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:28.601991 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:28.602004 3065941 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
I0706 13:38:29.274752 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:29.274775 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:29.274787 3065941 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
I0706 13:38:29.876675 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:29.876699 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:29.876710 3065941 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
I0706 13:38:30.673129 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:30.673155 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:30.673167 3065941 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
I0706 13:38:31.642347 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:31.642365 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:31.642374 3065941 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
I0706 13:38:32.990981 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:32.991004 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:32.991016 3065941 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
I0706 13:38:34.873548 3065941 system_pods.go:59] 1 kube-system pods found
I0706 13:38:34.873574 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:34.873585 3065941 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
I0706 13:38:37.573338 3065941 system_pods.go:59] 5 kube-system pods found
I0706 13:38:37.573358 3065941 system_pods.go:61] "etcd-calium" [fecf0dd1-caa4-4deb-90b4-e45d6a14b943] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0706 13:38:37.573369 3065941 system_pods.go:61] "kube-apiserver-calium" [43a91d61-6269-4c1c-bcc5-1d719d814a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0706 13:38:37.573384 3065941 system_pods.go:61] "kube-controller-manager-calium" [dfcea3e0-3d70-4a48-ae6e-a1f18d01f7b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0706 13:38:37.573390 3065941 system_pods.go:61] "kube-scheduler-calium" [ec7be3bb-a576-4a3e-9b68-09c9b58550ee] Pending
I0706 13:38:37.573396 3065941 system_pods.go:61] "storage-provisioner" [2122996e-d352-460c-8c17-53df4e3d7b9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0706 13:38:37.573403 3065941 system_pods.go:74] duration metric: took 10.509332508s to wait for pod list to return data ...
I0706 13:38:37.573411 3065941 kubeadm.go:547] duration metric: took 10.613198558s to wait for : map[apiserver:true system_pods:true] ...
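(storage-provisioner stayed Pending throughout the wait above because the node carries the node.kubernetes.io/not-ready taint until the CNI initializes; once calico-node reports Ready the taint clears and the pod schedules. To see it directly:)

  sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    describe node calium | grep Taints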
I0706 13:38:37.573424 3065941 node_conditions.go:102] verifying NodePressure condition ...
I0706 13:38:37.578545 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:38:37.578589 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:38:37.578604 3065941 node_conditions.go:105] duration metric: took 5.175687ms to run NodePressure ...
I0706 13:38:37.578617 3065941 start.go:219] waiting for startup goroutines ...
I0706 13:38:37.580182 3065941 out.go:170]
I0706 13:38:37.580785 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:38:37.582331 3065941 out.go:170] 👍 Starting node calium-m02 in cluster calium
I0706 13:38:37.582380 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:38:37.582403 3065941 cache.go:54] Caching tarball of preloaded images
I0706 13:38:37.582597 3065941 preload.go:166] Found /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0706 13:38:37.582629 3065941 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on containerd
I0706 13:38:37.582763 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:38:37.583029 3065941 cache.go:202] Successfully downloaded all kic artifacts
I0706 13:38:37.583056 3065941 start.go:313] acquiring machines lock for calium-m02: {Name:mk713ea1bc47ea454143c3d059dd7f13e11b4c0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0706 13:38:37.583123 3065941 start.go:317] acquired machines lock for "calium-m02" in 49.413µs
I0706 13:38:37.583139 3065941 start.go:89] Provisioning new machine with config: &{Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true} &{Name:m02 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:38:37.583226 3065941 start.go:126] createHost starting for "m02" (driver="kvm2")
I0706 13:38:37.584904 3065941 out.go:197] 🔥 Creating kvm2 VM (CPUs=2, Memory=2560MB, Disk=20000MB) ...
I0706 13:38:37.585061 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:38:37.585107 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:38:37.611535 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:33069
I0706 13:38:37.612167 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:38:37.613176 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:38:37.613211 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:38:37.613741 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:38:37.613960 3065941 main.go:128] libmachine: (calium-m02) Calling .GetMachineName
I0706 13:38:37.614142 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:37.614255 3065941 start.go:160] libmachine.API.Create for "calium" (driver="kvm2")
I0706 13:38:37.614279 3065941 client.go:168] LocalClient.Create starting
I0706 13:38:37.614309 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/ca.pem
I0706 13:38:37.614342 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:38:37.614358 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:38:37.614490 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/cert.pem
I0706 13:38:37.614505 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:38:37.614521 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:38:37.614578 3065941 main.go:128] libmachine: Running pre-create checks...
I0706 13:38:37.614586 3065941 main.go:128] libmachine: (calium-m02) Calling .PreCreateCheck
I0706 13:38:37.614758 3065941 main.go:128] libmachine: (calium-m02) Calling .GetConfigRaw
I0706 13:38:37.615151 3065941 main.go:128] libmachine: Creating machine...
I0706 13:38:37.615164 3065941 main.go:128] libmachine: (calium-m02) Calling .Create
I0706 13:38:37.615285 3065941 main.go:128] libmachine: (calium-m02) Creating KVM machine...
I0706 13:38:37.623366 3065941 main.go:128] libmachine: (calium-m02) DBG | found existing default KVM network
I0706 13:38:37.623452 3065941 main.go:128] libmachine: (calium-m02) DBG | found existing private KVM network calium
I0706 13:38:37.623600 3065941 main.go:128] libmachine: (calium-m02) Setting up store path in /home/dcooley/.minikube/machines/calium-m02 ...
I0706 13:38:37.623620 3065941 main.go:128] libmachine: (calium-m02) Building disk image from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso
I0706 13:38:37.623705 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:37.623599 3067188 common.go:101] Making disk image using store path: /home/dcooley/.minikube
I0706 13:38:37.623815 3065941 main.go:128] libmachine: (calium-m02) Downloading /home/dcooley/.minikube/cache/boot2docker.iso from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso...
I0706 13:38:37.759178 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:37.759078 3067188 common.go:108] Creating ssh key: /home/dcooley/.minikube/machines/calium-m02/id_rsa...
I0706 13:38:37.809469 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:37.809387 3067188 common.go:114] Creating raw disk image: /home/dcooley/.minikube/machines/calium-m02/calium-m02.rawdisk...
I0706 13:38:37.809488 3065941 main.go:128] libmachine: (calium-m02) DBG | Writing magic tar header
I0706 13:38:37.809516 3065941 main.go:128] libmachine: (calium-m02) DBG | Writing SSH key tar header
I0706 13:38:37.809524 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:37.809466 3067188 common.go:128] Fixing permissions on /home/dcooley/.minikube/machines/calium-m02 ...
I0706 13:38:37.809688 3065941 main.go:128] libmachine: (calium-m02) Setting executable bit set on /home/dcooley/.minikube/machines/calium-m02 (perms=drwx------)
I0706 13:38:37.809711 3065941 main.go:128] libmachine: (calium-m02) Setting executable bit set on /home/dcooley/.minikube/machines (perms=drwxr-xr-x)
I0706 13:38:37.809727 3065941 main.go:128] libmachine: (calium-m02) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines/calium-m02
I0706 13:38:37.809741 3065941 main.go:128] libmachine: (calium-m02) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines
I0706 13:38:37.809750 3065941 main.go:128] libmachine: (calium-m02) DBG | Checking permissions on dir: /home/dcooley/.minikube
I0706 13:38:37.809760 3065941 main.go:128] libmachine: (calium-m02) DBG | Checking permissions on dir: /home/dcooley
I0706 13:38:37.809768 3065941 main.go:128] libmachine: (calium-m02) DBG | Checking permissions on dir: /home
I0706 13:38:37.809781 3065941 main.go:128] libmachine: (calium-m02) DBG | Skipping /home - not owner
I0706 13:38:37.809804 3065941 main.go:128] libmachine: (calium-m02) Setting executable bit set on /home/dcooley/.minikube (perms=drwxr-xr-x)
I0706 13:38:37.809819 3065941 main.go:128] libmachine: (calium-m02) Setting executable bit set on /home/dcooley (perms=drwx--x--x)
I0706 13:38:37.809834 3065941 main.go:128] libmachine: (calium-m02) Creating domain...
I0706 13:38:37.878623 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:f8:b4:ef in network default
I0706 13:38:37.879615 3065941 main.go:128] libmachine: (calium-m02) Ensuring networks are active...
I0706 13:38:37.879639 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:37.889649 3065941 main.go:128] libmachine: (calium-m02) Ensuring network default is active
I0706 13:38:37.889991 3065941 main.go:128] libmachine: (calium-m02) Ensuring network calium is active
I0706 13:38:37.890332 3065941 main.go:128] libmachine: (calium-m02) Getting domain xml...
I0706 13:38:37.899464 3065941 main.go:128] libmachine: (calium-m02) Creating domain...
I0706 13:38:38.378416 3065941 main.go:128] libmachine: (calium-m02) Waiting to get IP...
I0706 13:38:38.379703 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:38.380219 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:38.380240 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:38.380174 3067188 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0706 13:38:38.645587 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:38.646705 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:38.646723 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:38.646651 3067188 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0706 13:38:39.029520 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:39.030153 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:39.030173 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:39.030094 3067188 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0706 13:38:39.454929 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:39.455685 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:39.455714 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:39.455610 3067188 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0706 13:38:39.929972 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:39.930239 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:39.930252 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:39.930228 3067188 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0706 13:38:40.519301 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:40.519908 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:40.519939 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:40.519806 3067188 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0706 13:38:41.356035 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:41.356896 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:41.356928 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:41.356791 3067188 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0706 13:38:42.104602 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:42.105191 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:42.105213 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:42.105140 3067188 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0706 13:38:43.094054 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:43.094559 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:43.094582 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:43.094497 3067188 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0706 13:38:44.285779 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:44.286232 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:44.286255 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:44.286172 3067188 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0706 13:38:45.965367 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:45.965814 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find current IP address of domain calium-m02 in network calium
I0706 13:38:45.965823 3065941 main.go:128] libmachine: (calium-m02) DBG | I0706 13:38:45.965797 3067188 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I0706 13:38:48.313683 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.314370 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has current primary IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.314391 3065941 main.go:128] libmachine: (calium-m02) Found IP for machine: 192.168.50.51
I0706 13:38:48.314404 3065941 main.go:128] libmachine: (calium-m02) Reserving static IP address...
I0706 13:38:48.314884 3065941 main.go:128] libmachine: (calium-m02) DBG | unable to find host DHCP lease matching {name: "calium-m02", mac: "52:54:00:c7:0a:6d", ip: "192.168.50.51"} in network calium
I0706 13:38:48.473696 3065941 main.go:128] libmachine: (calium-m02) Reserved static IP address: 192.168.50.51
I0706 13:38:48.473707 3065941 main.go:128] libmachine: (calium-m02) Waiting for SSH to be available...
I0706 13:38:48.473712 3065941 main.go:128] libmachine: (calium-m02) DBG | Getting to WaitForSSH function...
I0706 13:38:48.490318 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.490608 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:48.490629 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.490694 3065941 main.go:128] libmachine: (calium-m02) DBG | Using SSH client type: external
I0706 13:38:48.490739 3065941 main.go:128] libmachine: (calium-m02) DBG | Using SSH private key: /home/dcooley/.minikube/machines/calium-m02/id_rsa (-rw-------)
I0706 13:38:48.490785 3065941 main.go:128] libmachine: (calium-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/dcooley/.minikube/machines/calium-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0706 13:38:48.490808 3065941 main.go:128] libmachine: (calium-m02) DBG | About to run SSH command:
I0706 13:38:48.490824 3065941 main.go:128] libmachine: (calium-m02) DBG | exit 0
I0706 13:38:48.623951 3065941 main.go:128] libmachine: (calium-m02) DBG | SSH cmd err, output: <nil>:
I0706 13:38:48.624489 3065941 main.go:128] libmachine: (calium-m02) KVM machine creation complete!
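The WaitForSSH probe above can be replayed by hand with the same external ssh options logged at 13:38:48.490785; a minimal sketch, reusing the key path and IP from this run:

$ ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
    -i /home/dcooley/.minikube/machines/calium-m02/id_rsa docker@192.168.50.51 -- exit 0
$ echo $?    # 0 here corresponds to the "SSH cmd err, output: <nil>" line above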
I0706 13:38:48.624661 3065941 main.go:128] libmachine: (calium-m02) Calling .GetConfigRaw
I0706 13:38:48.625144 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:48.625262 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:48.625337 3065941 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
I0706 13:38:48.625342 3065941 main.go:128] libmachine: (calium-m02) Calling .GetState
I0706 13:38:48.632802 3065941 main.go:128] libmachine: Detecting operating system of created instance...
I0706 13:38:48.632815 3065941 main.go:128] libmachine: Waiting for SSH to be available...
I0706 13:38:48.632823 3065941 main.go:128] libmachine: Getting to WaitForSSH function...
I0706 13:38:48.632831 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:48.646524 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.646767 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:48.646784 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.646875 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:48.647007 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.647092 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.647142 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:48.647287 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:38:48.647404 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0706 13:38:48.647408 3065941 main.go:128] libmachine: About to run SSH command:
exit 0
I0706 13:38:48.770039 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:38:48.770049 3065941 main.go:128] libmachine: Detecting the provisioner...
I0706 13:38:48.770054 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:48.785123 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.785265 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:48.785276 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.785422 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:48.785590 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.785656 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.785724 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:48.785780 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:38:48.785881 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0706 13:38:48.785885 3065941 main.go:128] libmachine: About to run SSH command:
cat /etc/os-release
I0706 13:38:48.930482 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2020.02.12
ID=buildroot
VERSION_ID=2020.02.12
PRETTY_NAME="Buildroot 2020.02.12"
I0706 13:38:48.930661 3065941 main.go:128] libmachine: found compatible host: buildroot
I0706 13:38:48.930668 3065941 main.go:128] libmachine: Provisioning with buildroot...
I0706 13:38:48.930675 3065941 main.go:128] libmachine: (calium-m02) Calling .GetMachineName
I0706 13:38:48.930979 3065941 buildroot.go:166] provisioning hostname "calium-m02"
I0706 13:38:48.930997 3065941 main.go:128] libmachine: (calium-m02) Calling .GetMachineName
I0706 13:38:48.931282 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:48.943791 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.944262 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:48.944278 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:48.944470 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:48.944620 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.944704 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:48.944757 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:48.944832 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:38:48.944958 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0706 13:38:48.944964 3065941 main.go:128] libmachine: About to run SSH command:
sudo hostname calium-m02 && echo "calium-m02" | sudo tee /etc/hostname
I0706 13:38:49.067471 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: calium-m02
I0706 13:38:49.067496 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.088848 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.089159 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.089185 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.089445 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:49.089708 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.089837 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.089931 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:49.090031 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:38:49.090239 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0706 13:38:49.090268 3065941 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\scalium-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calium-m02/g' /etc/hosts;
else
echo '127.0.1.1 calium-m02' | sudo tee -a /etc/hosts;
fi
fi
I0706 13:38:49.235539 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:38:49.235557 3065941 buildroot.go:172] set auth options {CertDir:/home/dcooley/.minikube CaCertPath:/home/dcooley/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dcooley/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dcooley/.minikube/machines/server.pem ServerKeyPath:/home/dcooley/.minikube/machines/server-key.pem ClientKeyPath:/home/dcooley/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dcooley/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dcooley/.minikube}
I0706 13:38:49.235571 3065941 buildroot.go:174] setting up certificates
I0706 13:38:49.235578 3065941 provision.go:83] configureAuth start
I0706 13:38:49.235588 3065941 main.go:128] libmachine: (calium-m02) Calling .GetMachineName
I0706 13:38:49.235816 3065941 main.go:128] libmachine: (calium-m02) Calling .GetIP
I0706 13:38:49.246709 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.246917 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.246941 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.247057 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.258913 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.259152 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.259168 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.259293 3065941 provision.go:137] copyHostCerts
I0706 13:38:49.259328 3065941 exec_runner.go:145] found /home/dcooley/.minikube/key.pem, removing ...
I0706 13:38:49.259332 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/key.pem
I0706 13:38:49.259387 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/key.pem --> /home/dcooley/.minikube/key.pem (1675 bytes)
I0706 13:38:49.259512 3065941 exec_runner.go:145] found /home/dcooley/.minikube/ca.pem, removing ...
I0706 13:38:49.259519 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/ca.pem
I0706 13:38:49.259558 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/ca.pem --> /home/dcooley/.minikube/ca.pem (1078 bytes)
I0706 13:38:49.259619 3065941 exec_runner.go:145] found /home/dcooley/.minikube/cert.pem, removing ...
I0706 13:38:49.259621 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/cert.pem
I0706 13:38:49.259633 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/cert.pem --> /home/dcooley/.minikube/cert.pem (1123 bytes)
I0706 13:38:49.259659 3065941 provision.go:111] generating server cert: /home/dcooley/.minikube/machines/server.pem ca-key=/home/dcooley/.minikube/certs/ca.pem private-key=/home/dcooley/.minikube/certs/ca-key.pem org=dcooley.calium-m02 san=[192.168.50.51 192.168.50.51 localhost 127.0.0.1 minikube calium-m02]
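The server certificate generated above carries the SANs listed in that log line; assuming openssl is available on the host, they can be double-checked with:

$ openssl x509 -in /home/dcooley/.minikube/machines/server.pem -noout -text \
    | grep -A1 'Subject Alternative Name'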
I0706 13:38:49.547752 3065941 provision.go:171] copyRemoteCerts
I0706 13:38:49.547823 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0706 13:38:49.547846 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.572653 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.573153 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.573178 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.573438 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:49.573719 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.574037 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:49.574337 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m02/id_rsa Username:docker}
I0706 13:38:49.666022 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0706 13:38:49.686838 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0706 13:38:49.704636 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0706 13:38:49.723014 3065941 provision.go:86] duration metric: configureAuth took 487.422568ms
I0706 13:38:49.723034 3065941 buildroot.go:189] setting minikube options for container-runtime
I0706 13:38:49.723302 3065941 main.go:128] libmachine: Checking connection to Docker...
I0706 13:38:49.723317 3065941 main.go:128] libmachine: (calium-m02) Calling .GetURL
I0706 13:38:49.732216 3065941 main.go:128] libmachine: (calium-m02) DBG | Using libvirt version 7003000
I0706 13:38:49.748554 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.748898 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.748934 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.749152 3065941 main.go:128] libmachine: Docker is up and running!
I0706 13:38:49.749178 3065941 main.go:128] libmachine: Reticulating splines...
I0706 13:38:49.749187 3065941 client.go:171] LocalClient.Create took 12.134900395s
I0706 13:38:49.749212 3065941 start.go:168] duration metric: libmachine.API.Create for "calium" took 12.134957962s
I0706 13:38:49.749221 3065941 start.go:267] post-start starting for "calium-m02" (driver="kvm2")
I0706 13:38:49.749228 3065941 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0706 13:38:49.749254 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:49.749599 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0706 13:38:49.749622 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.766350 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.766610 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.766644 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.766719 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:49.767025 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.767219 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:49.767493 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m02/id_rsa Username:docker}
I0706 13:38:49.859584 3065941 ssh_runner.go:149] Run: cat /etc/os-release
I0706 13:38:49.863987 3065941 info.go:137] Remote host: Buildroot 2020.02.12
I0706 13:38:49.864009 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/addons for local assets ...
I0706 13:38:49.864096 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/files for local assets ...
I0706 13:38:49.864196 3065941 filesync.go:149] local asset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist -> 87-podman-bridge.conflist in /etc/cni/net.d
W0706 13:38:49.864211 3065941 vm_assets.go:106] NewFileAsset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist is an empty file!
I0706 13:38:49.864252 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0706 13:38:49.873060 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist --> /etc/cni/net.d/87-podman-bridge.conflist (0 bytes)
W0706 13:38:49.873081 3065941 ssh_runner.go:318] 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc000e8a5a0 file:0xc001804050}
W0706 13:38:49.874231 3065941 ssh_runner.go:347] asked to copy a 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc000e8a5a0 file:0xc001804050}
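These two warnings are the notable part of this run: the locally supplied 87-podman-bridge.conflist is empty, so a 0-byte file is synced into /etc/cni/net.d on the node. A quick way to confirm both ends, reusing the minikube ssh pattern from earlier in this session (the -p calium profile flag is assumed):

$ stat -c '%s %n' /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist
$ minikube ssh -p calium -n calium-m02 -- ls -l /etc/cni/net.d/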
I0706 13:38:49.892897 3065941 start.go:270] post-start completed in 143.657732ms
I0706 13:38:49.892956 3065941 main.go:128] libmachine: (calium-m02) Calling .GetConfigRaw
I0706 13:38:49.893867 3065941 main.go:128] libmachine: (calium-m02) Calling .GetIP
I0706 13:38:49.916380 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.916814 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.916843 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.917413 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:38:49.917707 3065941 start.go:129] duration metric: createHost completed in 12.334469896s
I0706 13:38:49.917718 3065941 start.go:80] releasing machines lock for "calium-m02", held for 12.334585934s
I0706 13:38:49.917764 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:49.918079 3065941 main.go:128] libmachine: (calium-m02) Calling .GetIP
I0706 13:38:49.935581 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.935954 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.935979 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.937330 3065941 out.go:170] 🌐 Found network options:
I0706 13:38:49.938478 3065941 out.go:170] ▪ NO_PROXY=192.168.50.9
W0706 13:38:49.938536 3065941 proxy.go:118] failed to check proxy env: error: IP not in block
I0706 13:38:49.938623 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:49.938866 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
I0706 13:38:49.939520 3065941 main.go:128] libmachine: (calium-m02) Calling .DriverName
W0706 13:38:49.939681 3065941 proxy.go:118] failed to check proxy env: error: IP not in block
I0706 13:38:49.939721 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:38:49.939785 3065941 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0706 13:38:49.939830 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.939851 3065941 ssh_runner.go:149] Run: sudo crictl images --output json
I0706 13:38:49.939870 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHHostname
I0706 13:38:49.958329 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.959162 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.959195 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.959333 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:49.959542 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.959654 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:49.959749 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m02/id_rsa Username:docker}
I0706 13:38:49.963081 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.963334 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:38:49.963361 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:38:49.963508 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHPort
I0706 13:38:49.963706 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHKeyPath
I0706 13:38:49.963896 3065941 main.go:128] libmachine: (calium-m02) Calling .GetSSHUsername
I0706 13:38:49.964147 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m02/id_rsa Username:docker}
I0706 13:38:54.078279 3065941 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.138409293s)
I0706 13:38:54.078308 3065941 containerd.go:573] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.7". assuming images are not preloaded.
I0706 13:38:54.078357 3065941 ssh_runner.go:149] Run: which lz4
I0706 13:38:54.082058 3065941 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
I0706 13:38:54.085919 3065941 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0706 13:38:54.085947 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (954876448 bytes)
I0706 13:38:55.579176 3065941 containerd.go:510] Took 1.497158 seconds to copy over tarball
I0706 13:38:55.579214 3065941 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0706 13:38:58.702854 3065941 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123615303s)
I0706 13:38:58.702874 3065941 containerd.go:517] Took 3.123684 seconds to extract the tarball
I0706 13:38:58.702887 3065941 ssh_runner.go:100] rm: /preloaded.tar.lz4
I0706 13:38:58.743055 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:38:58.895041 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:38:58.924272 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f crio
I0706 13:38:58.950452 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0706 13:38:58.960209 3065941 docker.go:153] disabling docker service ...
I0706 13:38:58.960258 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0706 13:38:58.969558 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
E0706 13:38:58.977316 3065941 docker.go:159] "Failed to stop" err="sudo systemctl stop -f docker.service: Process exited with status 5\nstdout:\n\nstderr:\nFailed to stop docker.service: Unit docker.service not loaded.\n" service="docker.service"
W0706 13:38:58.977341 3065941 cruntime.go:236] disable failed: sudo systemctl stop -f docker.service: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service: Unit docker.service not loaded.
I0706 13:38:58.977385 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0706 13:38:58.985626 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0706 13:38:58.993020 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
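The containerd config is shipped as a base64 blob and decoded into /etc/containerd/config.toml on the node. To read the rendered file, or to decode the blob locally, either of these works (profile flag assumed):

$ minikube ssh -p calium -n calium-m02 -- sudo cat /etc/containerd/config.toml
$ echo '<blob>' | base64 -d    # substitute the base64 string from the log line above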
I0706 13:38:59.007592 3065941 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0706 13:38:59.011975 3065941 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0706 13:38:59.012044 3065941 ssh_runner.go:149] Run: sudo modprobe br_netfilter
I0706 13:38:59.019760 3065941 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0706 13:38:59.023158 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:38:59.110067 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:38:59.128464 3065941 start.go:381] Will wait 60s for socket path /run/containerd/containerd.sock
I0706 13:38:59.128520 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:38:59.131676 3065941 retry.go:31] will retry after 714.263872ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0706 13:38:59.846695 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:38:59.851098 3065941 start.go:406] Will wait 60s for crictl version
I0706 13:38:59.851148 3065941 ssh_runner.go:149] Run: sudo crictl version
I0706 13:38:59.860043 3065941 start.go:415] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.4
RuntimeApiVersion: v1alpha2
I0706 13:38:59.860098 3065941 ssh_runner.go:149] Run: containerd --version
I0706 13:39:00.016730 3065941 out.go:170] 📦 Preparing Kubernetes v1.20.7 on containerd 1.4.4 ...
I0706 13:39:00.018240 3065941 out.go:170] ▪ env NO_PROXY=192.168.50.9
I0706 13:39:00.018297 3065941 main.go:128] libmachine: (calium-m02) Calling .GetIP
I0706 13:39:00.038512 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:39:00.038787 3065941 main.go:128] libmachine: (calium-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0a:6d", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:38:46 -0700 PDT Type:0 Mac:52:54:00:c7:0a:6d Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:calium-m02 Clientid:01:52:54:00:c7:0a:6d}
I0706 13:39:00.038803 3065941 main.go:128] libmachine: (calium-m02) DBG | domain calium-m02 has defined IP address 192.168.50.51 and MAC address 52:54:00:c7:0a:6d in network calium
I0706 13:39:00.038982 3065941 ssh_runner.go:149] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0706 13:39:00.041272 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0706 13:39:00.048317 3065941 certs.go:52] Setting up /home/dcooley/.minikube/profiles/calium for IP: 192.168.50.51
I0706 13:39:00.048367 3065941 certs.go:179] skipping minikubeCA CA generation: /home/dcooley/.minikube/ca.key
I0706 13:39:00.048381 3065941 certs.go:179] skipping proxyClientCA CA generation: /home/dcooley/.minikube/proxy-client-ca.key
I0706 13:39:00.048469 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca-key.pem (1675 bytes)
I0706 13:39:00.048509 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca.pem (1078 bytes)
I0706 13:39:00.048535 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/cert.pem (1123 bytes)
I0706 13:39:00.048558 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/key.pem (1675 bytes)
I0706 13:39:00.049052 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0706 13:39:00.058216 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0706 13:39:00.069677 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0706 13:39:00.084675 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0706 13:39:00.097071 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0706 13:39:00.108529 3065941 ssh_runner.go:149] Run: openssl version
I0706 13:39:00.113642 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0706 13:39:00.119805 3065941 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:00.123433 3065941 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 27 16:26 /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:00.123481 3065941 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:00.128996 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
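The b5213941 in the symlink name is the OpenSSL subject hash computed by the x509 -hash call two lines above; the .0 suffix is the c_rehash-style convention that lets OpenSSL look the CA up by hash in /etc/ssl/certs. Reproducing it against the local copy of the CA:

$ openssl x509 -hash -noout -in /home/dcooley/.minikube/ca.crt
b5213941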
I0706 13:39:00.136343 3065941 ssh_runner.go:149] Run: sudo crictl info
I0706 13:39:00.155431 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:39:00.155444 3065941 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0706 13:39:00.155459 3065941 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calium NodeName:calium-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0706 13:39:00.155572 3065941 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.51
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "calium-m02"
kubeletExtraArgs:
node-ip: 192.168.50.51
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0706 13:39:00.155641 3065941 kubeadm.go:909] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calium-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
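The kubelet unit and drop-in shown above are written to the node a few lines below (525 and 352 bytes respectively); to inspect the rendered drop-in afterwards, the same ssh pattern applies (profile flag assumed):

$ minikube ssh -p calium -n calium-m02 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf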
I0706 13:39:00.155690 3065941 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0706 13:39:00.161034 3065941 binaries.go:44] Found k8s binaries, skipping transfer
I0706 13:39:00.161088 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0706 13:39:00.166235 3065941 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (525 bytes)
I0706 13:39:00.174429 3065941 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0706 13:39:00.182521 3065941 ssh_runner.go:149] Run: grep 192.168.50.9 control-plane.minikube.internal$ /etc/hosts
I0706 13:39:00.185557 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0706 13:39:00.191701 3065941 host.go:66] Checking if "calium" exists ...
I0706 13:39:00.192152 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:39:00.192188 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:39:00.203502 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41907
I0706 13:39:00.203958 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:39:00.204503 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:39:00.204517 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:39:00.204758 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:39:00.204943 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:39:00.205054 3065941 start.go:229] JoinCluster: &{Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.51 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true}
I0706 13:39:00.205136 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm token create --print-join-command --ttl=0"
I0706 13:39:00.205149 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:39:00.226057 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:39:00.226490 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:39:00.226516 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:39:00.226683 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:39:00.226850 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:39:00.226975 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:39:00.227102 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:39:00.513320 3065941 start.go:250] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.50.51 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:39:00.513357 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token t58570.n57c88jh1fdo0c1r --discovery-token-ca-cert-hash sha256:44c5daed3e212d97c8d06d7b3521cf6076da926a38e5f6539ee6644c28695191 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=calium-m02"
I0706 13:39:13.289717 3065941 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token t58570.n57c88jh1fdo0c1r --discovery-token-ca-cert-hash sha256:44c5daed3e212d97c8d06d7b3521cf6076da926a38e5f6539ee6644c28695191 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=calium-m02": (12.776320723s)
I0706 13:39:13.289743 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0706 13:39:13.499839 3065941 start.go:231] JoinCluster complete in 13.294779379s
I0706 13:39:13.499854 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:39:13.499899 3065941 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0706 13:39:13.499905 3065941 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
I0706 13:39:13.506833 3065941 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
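With the join complete and the calico manifest applied, the result can be checked from the host, assuming kubectl is pointed at this profile (k8s-app=calico-node is the label standard Calico manifests use):

$ kubectl get nodes -o wide
$ kubectl -n kube-system get pods -l k8s-app=calico-node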
I0706 13:39:13.968253 3065941 start.go:214] Will wait 6m0s for node &{Name:m02 IP:192.168.50.51 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:39:13.969171 3065941 out.go:170] 🔎 Verifying Kubernetes components...
I0706 13:39:13.969237 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0706 13:39:13.976877 3065941 kubeadm.go:547] duration metric: took 8.584436ms to wait for : map[apiserver:true system_pods:true] ...
I0706 13:39:13.976892 3065941 node_conditions.go:102] verifying NodePressure condition ...
I0706 13:39:13.980188 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:39:13.980205 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:39:13.980215 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:39:13.980221 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:39:13.980226 3065941 node_conditions.go:105] duration metric: took 3.329631ms to run NodePressure ...
I0706 13:39:13.980237 3065941 start.go:219] waiting for startup goroutines ...
I0706 13:39:13.981488 3065941 out.go:170]
I0706 13:39:13.981967 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:39:13.983079 3065941 out.go:170] 👍 Starting node calium-m03 in cluster calium
I0706 13:39:13.983103 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:39:13.983121 3065941 cache.go:54] Caching tarball of preloaded images
I0706 13:39:13.983260 3065941 preload.go:166] Found /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0706 13:39:13.983282 3065941 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on containerd
I0706 13:39:13.983409 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:39:13.983578 3065941 cache.go:202] Successfully downloaded all kic artifacts
I0706 13:39:13.983599 3065941 start.go:313] acquiring machines lock for calium-m03: {Name:mk713ea1bc47ea454143c3d059dd7f13e11b4c0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0706 13:39:13.983656 3065941 start.go:317] acquired machines lock for "calium-m03" in 42.73µs
I0706 13:39:13.983668 3065941 start.go:89] Provisioning new machine with config: &{Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.51 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true} &{Name:m03 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:39:13.983734 3065941 start.go:126] createHost starting for "m03" (driver="kvm2")
I0706 13:39:13.984910 3065941 out.go:197] 🔥 Creating kvm2 VM (CPUs=2, Memory=2560MB, Disk=20000MB) ...
I0706 13:39:13.985041 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:39:13.985093 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:39:14.000053 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34833
I0706 13:39:14.000939 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:39:14.001562 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:39:14.001580 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:39:14.002315 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:39:14.002685 3065941 main.go:128] libmachine: (calium-m03) Calling .GetMachineName
I0706 13:39:14.002984 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:14.003262 3065941 start.go:160] libmachine.API.Create for "calium" (driver="kvm2")
I0706 13:39:14.003291 3065941 client.go:168] LocalClient.Create starting
I0706 13:39:14.003333 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/ca.pem
I0706 13:39:14.003377 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:39:14.003396 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:39:14.003513 3065941 main.go:128] libmachine: Reading certificate data from /home/dcooley/.minikube/certs/cert.pem
I0706 13:39:14.003525 3065941 main.go:128] libmachine: Decoding PEM data...
I0706 13:39:14.003533 3065941 main.go:128] libmachine: Parsing certificate...
I0706 13:39:14.003572 3065941 main.go:128] libmachine: Running pre-create checks...
I0706 13:39:14.003578 3065941 main.go:128] libmachine: (calium-m03) Calling .PreCreateCheck
I0706 13:39:14.003861 3065941 main.go:128] libmachine: (calium-m03) Calling .GetConfigRaw
I0706 13:39:14.004721 3065941 main.go:128] libmachine: Creating machine...
I0706 13:39:14.004732 3065941 main.go:128] libmachine: (calium-m03) Calling .Create
I0706 13:39:14.005050 3065941 main.go:128] libmachine: (calium-m03) Creating KVM machine...
I0706 13:39:14.013248 3065941 main.go:128] libmachine: (calium-m03) DBG | found existing default KVM network
I0706 13:39:14.013299 3065941 main.go:128] libmachine: (calium-m03) DBG | found existing private KVM network calium
I0706 13:39:14.013350 3065941 main.go:128] libmachine: (calium-m03) Setting up store path in /home/dcooley/.minikube/machines/calium-m03 ...
I0706 13:39:14.013365 3065941 main.go:128] libmachine: (calium-m03) Building disk image from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso
I0706 13:39:14.013418 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:14.013349 3067954 common.go:101] Making disk image using store path: /home/dcooley/.minikube
I0706 13:39:14.013467 3065941 main.go:128] libmachine: (calium-m03) Downloading /home/dcooley/.minikube/cache/boot2docker.iso from file:///home/dcooley/.minikube/cache/iso/minikube-v1.21.0.iso...
I0706 13:39:14.132118 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:14.131929 3067954 common.go:108] Creating ssh key: /home/dcooley/.minikube/machines/calium-m03/id_rsa...
I0706 13:39:14.396588 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:14.396461 3067954 common.go:114] Creating raw disk image: /home/dcooley/.minikube/machines/calium-m03/calium-m03.rawdisk...
I0706 13:39:14.396619 3065941 main.go:128] libmachine: (calium-m03) DBG | Writing magic tar header
I0706 13:39:14.396635 3065941 main.go:128] libmachine: (calium-m03) Setting executable bit set on /home/dcooley/.minikube/machines/calium-m03 (perms=drwx------)
I0706 13:39:14.396652 3065941 main.go:128] libmachine: (calium-m03) Setting executable bit set on /home/dcooley/.minikube/machines (perms=drwxr-xr-x)
I0706 13:39:14.396662 3065941 main.go:128] libmachine: (calium-m03) Setting executable bit set on /home/dcooley/.minikube (perms=drwxr-xr-x)
I0706 13:39:14.396677 3065941 main.go:128] libmachine: (calium-m03) Setting executable bit set on /home/dcooley (perms=drwx--x--x)
I0706 13:39:14.396687 3065941 main.go:128] libmachine: (calium-m03) Creating domain...
I0706 13:39:14.396701 3065941 main.go:128] libmachine: (calium-m03) DBG | Writing SSH key tar header
I0706 13:39:14.396721 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:14.396540 3067954 common.go:128] Fixing permissions on /home/dcooley/.minikube/machines/calium-m03 ...
I0706 13:39:14.396732 3065941 main.go:128] libmachine: (calium-m03) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines/calium-m03
I0706 13:39:14.396747 3065941 main.go:128] libmachine: (calium-m03) DBG | Checking permissions on dir: /home/dcooley/.minikube/machines
I0706 13:39:14.396764 3065941 main.go:128] libmachine: (calium-m03) DBG | Checking permissions on dir: /home/dcooley/.minikube
I0706 13:39:14.396776 3065941 main.go:128] libmachine: (calium-m03) DBG | Checking permissions on dir: /home/dcooley
I0706 13:39:14.396785 3065941 main.go:128] libmachine: (calium-m03) DBG | Checking permissions on dir: /home
I0706 13:39:14.396799 3065941 main.go:128] libmachine: (calium-m03) DBG | Skipping /home - not owner
I0706 13:39:14.478107 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:d0:ff:e6 in network default
I0706 13:39:14.479013 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:14.479033 3065941 main.go:128] libmachine: (calium-m03) Ensuring networks are active...
I0706 13:39:14.489112 3065941 main.go:128] libmachine: (calium-m03) Ensuring network default is active
I0706 13:39:14.489431 3065941 main.go:128] libmachine: (calium-m03) Ensuring network calium is active
I0706 13:39:14.489877 3065941 main.go:128] libmachine: (calium-m03) Getting domain xml...
I0706 13:39:14.497854 3065941 main.go:128] libmachine: (calium-m03) Creating domain...
I0706 13:39:14.836577 3065941 main.go:128] libmachine: (calium-m03) Waiting to get IP...
I0706 13:39:14.837695 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:14.838238 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:14.838271 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:14.838212 3067954 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0706 13:39:15.102892 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:15.103354 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:15.103392 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:15.103326 3067954 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0706 13:39:15.486445 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:15.487008 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:15.487031 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:15.486962 3067954 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0706 13:39:15.911290 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:15.911649 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:15.911664 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:15.911632 3067954 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0706 13:39:16.386417 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:16.387170 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:16.387189 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:16.387118 3067954 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0706 13:39:16.976043 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:16.976376 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:16.976396 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:16.976350 3067954 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0706 13:39:17.812064 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:17.812813 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:17.812827 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:17.812794 3067954 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0706 13:39:18.560900 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:18.561801 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:18.561820 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:18.561745 3067954 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0706 13:39:19.550414 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:19.550980 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:19.551023 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:19.550858 3067954 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0706 13:39:20.743093 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:20.743546 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:20.743558 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:20.743520 3067954 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0706 13:39:22.424291 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:22.425276 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find current IP address of domain calium-m03 in network calium
I0706 13:39:22.425319 3065941 main.go:128] libmachine: (calium-m03) DBG | I0706 13:39:22.425148 3067954 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I0706 13:39:24.773159 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:24.773757 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has current primary IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:24.773820 3065941 main.go:128] libmachine: (calium-m03) Found IP for machine: 192.168.50.221
I0706 13:39:24.773854 3065941 main.go:128] libmachine: (calium-m03) Reserving static IP address...
I0706 13:39:24.774401 3065941 main.go:128] libmachine: (calium-m03) DBG | unable to find host DHCP lease matching {name: "calium-m03", mac: "52:54:00:97:96:6a", ip: "192.168.50.221"} in network calium
I0706 13:39:24.941168 3065941 main.go:128] libmachine: (calium-m03) Reserved static IP address: 192.168.50.221
I0706 13:39:24.941183 3065941 main.go:128] libmachine: (calium-m03) DBG | Getting to WaitForSSH function...
I0706 13:39:24.941193 3065941 main.go:128] libmachine: (calium-m03) Waiting for SSH to be available...
I0706 13:39:24.951803 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:24.951950 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:97:96:6a}
I0706 13:39:24.951959 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:24.952016 3065941 main.go:128] libmachine: (calium-m03) DBG | Using SSH client type: external
I0706 13:39:24.952025 3065941 main.go:128] libmachine: (calium-m03) DBG | Using SSH private key: /home/dcooley/.minikube/machines/calium-m03/id_rsa (-rw-------)
I0706 13:39:24.952043 3065941 main.go:128] libmachine: (calium-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/dcooley/.minikube/machines/calium-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0706 13:39:24.952048 3065941 main.go:128] libmachine: (calium-m03) DBG | About to run SSH command:
I0706 13:39:24.952054 3065941 main.go:128] libmachine: (calium-m03) DBG | exit 0
I0706 13:39:25.084468 3065941 main.go:128] libmachine: (calium-m03) DBG | SSH cmd err, output: <nil>:
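
The `exit 0` probe above is how libmachine decides SSH is up: any command that returns status 0 over the freshly created key counts as available. It can be reproduced by hand from the flags logged a few lines earlier; a minimal sketch, using the key path and guest IP from this run (the ControlMaster/LogLevel options are omitted for brevity):

    # Probe SSH the way libmachine does; exit status 0 means the node is reachable.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o PasswordAuthentication=no -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/dcooley/.minikube/machines/calium-m03/id_rsa \
        -p 22 docker@192.168.50.221 'exit 0' && echo "ssh is up"
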
I0706 13:39:25.085120 3065941 main.go:128] libmachine: (calium-m03) KVM machine creation complete!
I0706 13:39:25.085221 3065941 main.go:128] libmachine: (calium-m03) Calling .GetConfigRaw
I0706 13:39:25.085800 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:25.085955 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:25.086117 3065941 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
I0706 13:39:25.086128 3065941 main.go:128] libmachine: (calium-m03) Calling .GetState
I0706 13:39:25.096337 3065941 main.go:128] libmachine: Detecting operating system of created instance...
I0706 13:39:25.096350 3065941 main.go:128] libmachine: Waiting for SSH to be available...
I0706 13:39:25.096359 3065941 main.go:128] libmachine: Getting to WaitForSSH function...
I0706 13:39:25.096367 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.117988 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.118360 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.118382 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.118555 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:25.118706 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.118802 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.118873 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:25.118966 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:39:25.119133 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I0706 13:39:25.119142 3065941 main.go:128] libmachine: About to run SSH command:
exit 0
I0706 13:39:25.256752 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:39:25.256771 3065941 main.go:128] libmachine: Detecting the provisioner...
I0706 13:39:25.256785 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.278022 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.278611 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.278650 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.278984 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:25.279217 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.279387 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.279574 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:25.279811 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:39:25.280050 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I0706 13:39:25.280073 3065941 main.go:128] libmachine: About to run SSH command:
cat /etc/os-release
I0706 13:39:25.401009 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2020.02.12
ID=buildroot
VERSION_ID=2020.02.12
PRETTY_NAME="Buildroot 2020.02.12"
I0706 13:39:25.401070 3065941 main.go:128] libmachine: found compatible host: buildroot
I0706 13:39:25.401078 3065941 main.go:128] libmachine: Provisioning with buildroot...
I0706 13:39:25.401087 3065941 main.go:128] libmachine: (calium-m03) Calling .GetMachineName
I0706 13:39:25.401431 3065941 buildroot.go:166] provisioning hostname "calium-m03"
I0706 13:39:25.401452 3065941 main.go:128] libmachine: (calium-m03) Calling .GetMachineName
I0706 13:39:25.401631 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.416481 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.416790 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.416813 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.416926 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:25.417092 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.417259 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.417440 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:25.417634 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:39:25.417812 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I0706 13:39:25.417824 3065941 main.go:128] libmachine: About to run SSH command:
sudo hostname calium-m03 && echo "calium-m03" | sudo tee /etc/hostname
I0706 13:39:25.535160 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>: calium-m03
I0706 13:39:25.535176 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.551421 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.551747 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.551760 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.551986 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:25.552170 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.552352 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.552517 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:25.552683 3065941 main.go:128] libmachine: Using SSH client type: native
I0706 13:39:25.552851 3065941 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x55579c4154c0] 0x55579c415480 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I0706 13:39:25.552874 3065941 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\scalium-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calium-m03/g' /etc/hosts;
else
echo '127.0.1.1 calium-m03' | sudo tee -a /etc/hosts;
fi
fi
I0706 13:39:25.718811 3065941 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0706 13:39:25.718828 3065941 buildroot.go:172] set auth options {CertDir:/home/dcooley/.minikube CaCertPath:/home/dcooley/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dcooley/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dcooley/.minikube/machines/server.pem ServerKeyPath:/home/dcooley/.minikube/machines/server-key.pem ClientKeyPath:/home/dcooley/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dcooley/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dcooley/.minikube}
I0706 13:39:25.718845 3065941 buildroot.go:174] setting up certificates
I0706 13:39:25.718852 3065941 provision.go:83] configureAuth start
I0706 13:39:25.718862 3065941 main.go:128] libmachine: (calium-m03) Calling .GetMachineName
I0706 13:39:25.719038 3065941 main.go:128] libmachine: (calium-m03) Calling .GetIP
I0706 13:39:25.735837 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.736203 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.736225 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.736396 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.750531 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.750857 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.750872 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.751011 3065941 provision.go:137] copyHostCerts
I0706 13:39:25.751060 3065941 exec_runner.go:145] found /home/dcooley/.minikube/key.pem, removing ...
I0706 13:39:25.751066 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/key.pem
I0706 13:39:25.751135 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/key.pem --> /home/dcooley/.minikube/key.pem (1675 bytes)
I0706 13:39:25.751229 3065941 exec_runner.go:145] found /home/dcooley/.minikube/ca.pem, removing ...
I0706 13:39:25.751234 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/ca.pem
I0706 13:39:25.751266 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/ca.pem --> /home/dcooley/.minikube/ca.pem (1078 bytes)
I0706 13:39:25.751329 3065941 exec_runner.go:145] found /home/dcooley/.minikube/cert.pem, removing ...
I0706 13:39:25.751333 3065941 exec_runner.go:190] rm: /home/dcooley/.minikube/cert.pem
I0706 13:39:25.751363 3065941 exec_runner.go:152] cp: /home/dcooley/.minikube/certs/cert.pem --> /home/dcooley/.minikube/cert.pem (1123 bytes)
I0706 13:39:25.751443 3065941 provision.go:111] generating server cert: /home/dcooley/.minikube/machines/server.pem ca-key=/home/dcooley/.minikube/certs/ca.pem private-key=/home/dcooley/.minikube/certs/ca-key.pem org=dcooley.calium-m03 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube calium-m03]
I0706 13:39:25.860591 3065941 provision.go:171] copyRemoteCerts
I0706 13:39:25.860630 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0706 13:39:25.860645 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:25.876350 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.876655 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:25.876665 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:25.876837 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:25.876916 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:25.876993 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:25.877046 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m03/id_rsa Username:docker}
I0706 13:39:25.971062 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0706 13:39:25.979592 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0706 13:39:25.988542 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0706 13:39:26.003348 3065941 provision.go:86] duration metric: configureAuth took 284.485904ms
I0706 13:39:26.003364 3065941 buildroot.go:189] setting minikube options for container-runtime
I0706 13:39:26.003511 3065941 main.go:128] libmachine: Checking connection to Docker...
I0706 13:39:26.003519 3065941 main.go:128] libmachine: (calium-m03) Calling .GetURL
I0706 13:39:26.012117 3065941 main.go:128] libmachine: (calium-m03) DBG | Using libvirt version 7003000
I0706 13:39:26.028237 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.028554 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.028577 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.028699 3065941 main.go:128] libmachine: Docker is up and running!
I0706 13:39:26.028704 3065941 main.go:128] libmachine: Reticulating splines...
I0706 13:39:26.028709 3065941 client.go:171] LocalClient.Create took 12.025414037s
I0706 13:39:26.028719 3065941 start.go:168] duration metric: libmachine.API.Create for "calium" took 12.025466917s
I0706 13:39:26.028723 3065941 start.go:267] post-start starting for "calium-m03" (driver="kvm2")
I0706 13:39:26.028726 3065941 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0706 13:39:26.028737 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:26.028891 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0706 13:39:26.028911 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:26.043614 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.043792 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.043811 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.043970 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:26.044168 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:26.044259 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:26.044325 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m03/id_rsa Username:docker}
I0706 13:39:26.139833 3065941 ssh_runner.go:149] Run: cat /etc/os-release
I0706 13:39:26.144431 3065941 info.go:137] Remote host: Buildroot 2020.02.12
I0706 13:39:26.144458 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/addons for local assets ...
I0706 13:39:26.144545 3065941 filesync.go:126] Scanning /home/dcooley/.minikube/files for local assets ...
I0706 13:39:26.144653 3065941 filesync.go:149] local asset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist -> 87-podman-bridge.conflist in /etc/cni/net.d
W0706 13:39:26.144670 3065941 vm_assets.go:106] NewFileAsset: /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist is an empty file!
I0706 13:39:26.144706 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0706 13:39:26.151633 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist --> /etc/cni/net.d/87-podman-bridge.conflist (0 bytes)
W0706 13:39:26.151649 3065941 ssh_runner.go:318] 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc00126ce70 file:0xc0018042b0}
W0706 13:39:26.152735 3065941 ssh_runner.go:347] asked to copy a 0 byte asset: &{BaseAsset:{SourcePath:/home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist TargetDir:/etc/cni/net.d TargetName:87-podman-bridge.conflist Permissions:0644 Source:} reader:0xc00126ce70 file:0xc0018042b0}
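
The two W-level lines above are the notable signal in this log: the file-sync asset /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist is zero bytes, so an empty conflist is copied into /etc/cni/net.d on the node. A quick way to confirm both ends; a sketch, assuming the calium profile is selected with -p as elsewhere in this session:

    # On the host: the source asset minikube syncs is empty.
    stat -c '%s %n' /home/dcooley/.minikube/files/etc/cni/net.d/87-podman-bridge.conflist

    # On the node: the copy that landed in the CNI config dir is empty too.
    minikube -p calium ssh -n calium-m03 -- stat -c '%s %n' /etc/cni/net.d/87-podman-bridge.conflist
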
I0706 13:39:26.171428 3065941 start.go:270] post-start completed in 142.694815ms
I0706 13:39:26.171461 3065941 main.go:128] libmachine: (calium-m03) Calling .GetConfigRaw
I0706 13:39:26.172208 3065941 main.go:128] libmachine: (calium-m03) Calling .GetIP
I0706 13:39:26.188906 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.189366 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.189395 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.189730 3065941 profile.go:148] Saving config to /home/dcooley/.minikube/profiles/calium/config.json ...
I0706 13:39:26.190063 3065941 start.go:129] duration metric: createHost completed in 12.206318277s
I0706 13:39:26.190074 3065941 start.go:80] releasing machines lock for "calium-m03", held for 12.20641062s
I0706 13:39:26.190146 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:26.190381 3065941 main.go:128] libmachine: (calium-m03) Calling .GetIP
I0706 13:39:26.205723 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.206024 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.206034 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.207428 3065941 out.go:170] 🌐 Found network options:
I0706 13:39:26.208361 3065941 out.go:170] ▪ NO_PROXY=192.168.50.9,192.168.50.51
W0706 13:39:26.208458 3065941 proxy.go:118] fail to check proxy env: Error ip not in block
W0706 13:39:26.208481 3065941 proxy.go:118] fail to check proxy env: Error ip not in block
I0706 13:39:26.208514 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:26.208798 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
I0706 13:39:26.209311 3065941 main.go:128] libmachine: (calium-m03) Calling .DriverName
W0706 13:39:26.209624 3065941 proxy.go:118] fail to check proxy env: Error ip not in block
W0706 13:39:26.209660 3065941 proxy.go:118] fail to check proxy env: Error ip not in block
I0706 13:39:26.209716 3065941 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime containerd
I0706 13:39:26.209725 3065941 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0706 13:39:26.209774 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:26.209839 3065941 ssh_runner.go:149] Run: sudo crictl images --output json
I0706 13:39:26.209864 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHHostname
I0706 13:39:26.233634 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.234085 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.234127 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.234236 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:26.234401 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:26.234540 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:26.234691 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m03/id_rsa Username:docker}
I0706 13:39:26.239653 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.240067 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:26.240086 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:26.240317 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHPort
I0706 13:39:26.240701 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHKeyPath
I0706 13:39:26.240935 3065941 main.go:128] libmachine: (calium-m03) Calling .GetSSHUsername
I0706 13:39:26.241186 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium-m03/id_rsa Username:docker}
I0706 13:39:30.357761 3065941 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.147895077s)
I0706 13:39:30.357806 3065941 containerd.go:573] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.7". assuming images are not preloaded.
I0706 13:39:30.357869 3065941 ssh_runner.go:149] Run: which lz4
I0706 13:39:30.362394 3065941 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
I0706 13:39:30.367825 3065941 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0706 13:39:30.367855 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (954876448 bytes)
I0706 13:39:31.825853 3065941 containerd.go:510] Took 1.463535 seconds to copy over tarball
I0706 13:39:31.825912 3065941 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0706 13:39:35.202784 3065941 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.37685178s)
I0706 13:39:35.202796 3065941 containerd.go:517] Took 3.376929 seconds to extract the tarball
I0706 13:39:35.202803 3065941 ssh_runner.go:100] rm: /preloaded.tar.lz4
I0706 13:39:35.249057 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:39:35.387806 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:39:35.414724 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f crio
I0706 13:39:35.436690 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0706 13:39:35.442447 3065941 docker.go:153] disabling docker service ...
I0706 13:39:35.442500 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0706 13:39:35.448421 3065941 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
E0706 13:39:35.456861 3065941 docker.go:159] "Failed to stop" err="sudo systemctl stop -f docker.service: Process exited with status 5\nstdout:\n\nstderr:\nFailed to stop docker.service: Unit docker.service not loaded.\n" service="docker.service"
W0706 13:39:35.456888 3065941 cruntime.go:236] disable failed: sudo systemctl stop -f docker.service: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service: Unit docker.service not loaded.
I0706 13:39:35.456947 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0706 13:39:35.464369 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0706 13:39:35.471630 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
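
The long argument above is the generated containerd configuration, base64-encoded so it survives shell quoting, then decoded into /etc/containerd/config.toml on the node. Decoding it shows, among other settings, sandbox_image = "k8s.gcr.io/pause:3.2" and the CNI paths bin_dir = "/opt/cni/bin" and conf_dir = "/etc/cni/net.d". To read the rendered file, either decode the payload locally or cat it on the node; a sketch, with the profile/node names taken from this run:

    # Decode the payload from this log locally (paste the base64 string in place of the placeholder):
    echo '<base64 payload above>' | base64 -d

    # Or read the file containerd actually loaded on the new node:
    minikube -p calium ssh -n calium-m03 -- sudo cat /etc/containerd/config.toml
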
I0706 13:39:35.479740 3065941 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0706 13:39:35.482843 3065941 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0706 13:39:35.482894 3065941 ssh_runner.go:149] Run: sudo modprobe br_netfilter
I0706 13:39:35.490448 3065941 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0706 13:39:35.495447 3065941 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0706 13:39:35.579191 3065941 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0706 13:39:35.594236 3065941 start.go:381] Will wait 60s for socket path /run/containerd/containerd.sock
I0706 13:39:35.594288 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:39:35.599888 3065941 retry.go:31] will retry after 880.657189ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0706 13:39:36.481697 3065941 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0706 13:39:36.487726 3065941 start.go:406] Will wait 60s for crictl version
I0706 13:39:36.487802 3065941 ssh_runner.go:149] Run: sudo crictl version
I0706 13:39:36.500017 3065941 start.go:415] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.4
RuntimeApiVersion: v1alpha2
I0706 13:39:36.500087 3065941 ssh_runner.go:149] Run: containerd --version
I0706 13:39:36.522714 3065941 out.go:170] 📦 Preparing Kubernetes v1.20.7 on containerd 1.4.4 ...
I0706 13:39:36.523667 3065941 out.go:170] ▪ env NO_PROXY=192.168.50.9
I0706 13:39:36.524574 3065941 out.go:170] ▪ env NO_PROXY=192.168.50.9,192.168.50.51
I0706 13:39:36.524608 3065941 main.go:128] libmachine: (calium-m03) Calling .GetIP
I0706 13:39:36.541770 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:36.542130 3065941 main.go:128] libmachine: (calium-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:96:6a", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:39:22 -0700 PDT Type:0 Mac:52:54:00:97:96:6a Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:calium-m03 Clientid:01:52:54:00:97:96:6a}
I0706 13:39:36.542154 3065941 main.go:128] libmachine: (calium-m03) DBG | domain calium-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:97:96:6a in network calium
I0706 13:39:36.542321 3065941 ssh_runner.go:149] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0706 13:39:36.546356 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0706 13:39:36.556900 3065941 certs.go:52] Setting up /home/dcooley/.minikube/profiles/calium for IP: 192.168.50.221
I0706 13:39:36.556955 3065941 certs.go:179] skipping minikubeCA CA generation: /home/dcooley/.minikube/ca.key
I0706 13:39:36.556967 3065941 certs.go:179] skipping proxyClientCA CA generation: /home/dcooley/.minikube/proxy-client-ca.key
I0706 13:39:36.557038 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca-key.pem (1675 bytes)
I0706 13:39:36.557070 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/ca.pem (1078 bytes)
I0706 13:39:36.557090 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/cert.pem (1123 bytes)
I0706 13:39:36.557108 3065941 certs.go:369] found cert: /home/dcooley/.minikube/certs/home/dcooley/.minikube/certs/key.pem (1675 bytes)
I0706 13:39:36.557554 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0706 13:39:36.575214 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0706 13:39:36.591852 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0706 13:39:36.608315 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0706 13:39:36.626000 3065941 ssh_runner.go:316] scp /home/dcooley/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0706 13:39:36.643031 3065941 ssh_runner.go:149] Run: openssl version
I0706 13:39:36.648943 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0706 13:39:36.656000 3065941 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:36.659984 3065941 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 27 16:26 /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:36.660100 3065941 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0706 13:39:36.665795 3065941 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
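
The symlink name b5213941.0 is not arbitrary: OpenSSL looks CA certificates up in /etc/ssl/certs by subject hash, and the `openssl x509 -hash` run above prints exactly that value, which minikube then uses as the link name. A quick consistency check on the node:

    # The printed hash should match the symlink created above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0
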
I0706 13:39:36.673031 3065941 ssh_runner.go:149] Run: sudo crictl info
I0706 13:39:36.692179 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:39:36.692188 3065941 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0706 13:39:36.692195 3065941 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calium NodeName:calium-m03 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0706 13:39:36.692264 3065941 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.221
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "calium-m03"
kubeletExtraArgs:
node-ip: 192.168.50.221
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0706 13:39:36.692302 3065941 kubeadm.go:909] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calium-m03 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.221 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0706 13:39:36.692338 3065941 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0706 13:39:36.698578 3065941 binaries.go:44] Found k8s binaries, skipping transfer
I0706 13:39:36.698646 3065941 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0706 13:39:36.703937 3065941 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (526 bytes)
I0706 13:39:36.716450 3065941 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
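
Both files just written (the 10-kubeadm.conf drop-in and kubelet.service) can be verified with systemd once the node is up; a sketch, assuming the profile and node names from this run:

    # Show the merged kubelet unit, including the 10-kubeadm.conf drop-in.
    minikube -p calium ssh -n calium-m03 -- systemctl cat kubelet.service
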
I0706 13:39:36.727504 3065941 ssh_runner.go:149] Run: grep 192.168.50.9 control-plane.minikube.internal$ /etc/hosts
I0706 13:39:36.730442 3065941 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0706 13:39:36.739271 3065941 host.go:66] Checking if "calium" exists ...
I0706 13:39:36.739813 3065941 main.go:128] libmachine: Found binary path at /home/dcooley/.minikube/bin/docker-machine-driver-kvm2
I0706 13:39:36.739860 3065941 main.go:128] libmachine: Launching plugin server for driver kvm2
I0706 13:39:36.761813 3065941 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38219
I0706 13:39:36.762429 3065941 main.go:128] libmachine: () Calling .GetVersion
I0706 13:39:36.763002 3065941 main.go:128] libmachine: Using API Version 1
I0706 13:39:36.763019 3065941 main.go:128] libmachine: () Calling .SetConfigRaw
I0706 13:39:36.763327 3065941 main.go:128] libmachine: () Calling .GetMachineName
I0706 13:39:36.763456 3065941 main.go:128] libmachine: (calium) Calling .DriverName
I0706 13:39:36.763554 3065941 start.go:229] JoinCluster: &{Name:calium KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.21.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2560 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calium Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.51 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true} {Name:m03 IP:192.168.50.221 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:calium MultiNodeRequested:true}
I0706 13:39:36.763665 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm token create --print-join-command --ttl=0"
I0706 13:39:36.763686 3065941 main.go:128] libmachine: (calium) Calling .GetSSHHostname
I0706 13:39:36.776803 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined MAC address 52:54:00:f9:db:49 in network calium
I0706 13:39:36.777126 3065941 main.go:128] libmachine: (calium) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:db:49", ip: ""} in network calium: {Iface:virbr2 ExpiryTime:2021-07-06 14:37:46 -0700 PDT Type:0 Mac:52:54:00:f9:db:49 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:calium Clientid:01:52:54:00:f9:db:49}
I0706 13:39:36.777149 3065941 main.go:128] libmachine: (calium) DBG | domain calium has defined IP address 192.168.50.9 and MAC address 52:54:00:f9:db:49 in network calium
I0706 13:39:36.777367 3065941 main.go:128] libmachine: (calium) Calling .GetSSHPort
I0706 13:39:36.777666 3065941 main.go:128] libmachine: (calium) Calling .GetSSHKeyPath
I0706 13:39:36.777863 3065941 main.go:128] libmachine: (calium) Calling .GetSSHUsername
I0706 13:39:36.777997 3065941 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/dcooley/.minikube/machines/calium/id_rsa Username:docker}
I0706 13:39:37.019878 3065941 start.go:250] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.50.221 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:39:37.019913 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token 9miig3.94qwpuhak09zpr70 --discovery-token-ca-cert-hash sha256:44c5daed3e212d97c8d06d7b3521cf6076da926a38e5f6539ee6644c28695191 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=calium-m03"
I0706 13:39:49.586109 3065941 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token 9miig3.94qwpuhak09zpr70 --discovery-token-ca-cert-hash sha256:44c5daed3e212d97c8d06d7b3521cf6076da926a38e5f6539ee6644c28695191 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=calium-m03": (12.566176012s)
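
The --discovery-token-ca-cert-hash in that join command pins the cluster CA so the new node cannot be tricked into joining a different control plane. It can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation, using the certificatesDir from the config above:

    # sha256 over the DER-encoded public key of the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
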
I0706 13:39:49.586133 3065941 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0706 13:39:49.807596 3065941 start.go:231] JoinCluster complete in 13.044034314s
I0706 13:39:49.807619 3065941 cni.go:93] Creating CNI manager for "calico"
I0706 13:39:49.807674 3065941 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0706 13:39:49.807687 3065941 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
I0706 13:39:49.818491 3065941 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0706 13:39:50.056373 3065941 start.go:214] Will wait 6m0s for node &{Name:m03 IP:192.168.50.221 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0706 13:39:50.057443 3065941 out.go:170] 🔎 Verifying Kubernetes components...
I0706 13:39:50.057565 3065941 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0706 13:39:50.067148 3065941 kubeadm.go:547] duration metric: took 10.743119ms to wait for : map[apiserver:true system_pods:true] ...
I0706 13:39:50.067197 3065941 node_conditions.go:102] verifying NodePressure condition ...
I0706 13:39:50.071641 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:39:50.071658 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:39:50.071668 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:39:50.071673 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:39:50.071678 3065941 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0706 13:39:50.071683 3065941 node_conditions.go:123] node cpu capacity is 2
I0706 13:39:50.071687 3065941 node_conditions.go:105] duration metric: took 4.48513ms to run NodePressure ...
I0706 13:39:50.071697 3065941 start.go:219] waiting for startup goroutines ...
I0706 13:39:50.114346 3065941 start.go:463] kubectl: 1.21.2, cluster: 1.20.7 (minor skew: 1)
I0706 13:39:50.115544 3065941 out.go:170] 🏄 Done! kubectl is now configured to use "calium" cluster and "default" namespace by default
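
The final line above reports the client/server version comparison minikube performs at the end of `start` (start.go:463). A minimal sketch of that minor-skew arithmetic in Go, with the two versions from the log hard-coded — an illustration, not minikube's actual code:

```go
// skew.go — a minimal sketch (not minikube's implementation) of the check
// behind "kubectl: 1.21.2, cluster: 1.20.7 (minor skew: 1)" above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// minorSkew returns the absolute difference between two versions' minor components.
func minorSkew(a, b string) (int, error) {
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.21.2", "1.20.7")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.21.2, cluster: 1.20.7 (minor skew: %d)\n", skew)
}
```

A skew of one minor version, as here, is within kubectl's supported window; anything larger is what would make minikube warn.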
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f0e5ead431590 bbad212163da5 21 minutes ago Running cilium-agent 0 0791cca303356
7c6ad97edc025 bbad212163da5 21 minutes ago Exited clean-cilium-state 0 0791cca303356
7790a608f724a bbad212163da5 21 minutes ago Exited mount-cgroup 0 0791cca303356
abcb8fb5160a8 ac08a3af350bd 26 minutes ago Running calico-kube-controllers 0 7f08717729885
a6f120dee4037 04a9b816c7535 26 minutes ago Running calico-node 0 fa56ab392d3b7
21bc71ba4dcd7 6e38f40d628db 26 minutes ago Running storage-provisioner 0 8adf99184504d
3a015cc73c8ca 7f93af2e7e114 26 minutes ago Exited flexvol-driver 0 fa56ab392d3b7
0db198bf980d2 35a7136bc71a7 26 minutes ago Exited install-cni 0 fa56ab392d3b7
4a7264cc7c277 35a7136bc71a7 26 minutes ago Exited upgrade-ipam 0 fa56ab392d3b7
de64726cb4778 ff54c88b8ecfa 26 minutes ago Running kube-proxy 0 049df38d39bc4
5241f584ee3f1 0369cf4303ffd 27 minutes ago Running etcd 0 84c9b3fe2f306
c613519a1ff9a 38f903b540101 27 minutes ago Running kube-scheduler 0 1fb42ab45e9db
bbe2a780c24fa 22d1a2072ec7b 27 minutes ago Running kube-controller-manager 0 2ef03e93c3df5
739bdc4c8f670 034671b24f0f1 27 minutes ago Running kube-apiserver 0 ea399b565ee24
*
* ==> containerd <==
* -- Logs begin at Tue 2021-07-06 20:37:43 UTC, end at Tue 2021-07-06 21:05:22 UTC. --
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.303919299Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-ready -bird-ready] and timeout 1 (s)"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.408137674Z" level=info msg="Exec process \"a3cf2e6dcf8f4df6d61ec6f4984bee6ce5c0afc185fe9c941734e7fde15ec8b6\" exits with exit code 0 and error <nil>"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.408375944Z" level=info msg="Finish piping \"stderr\" of container exec \"a3cf2e6dcf8f4df6d61ec6f4984bee6ce5c0afc185fe9c941734e7fde15ec8b6\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.408397505Z" level=info msg="Finish piping \"stdout\" of container exec \"a3cf2e6dcf8f4df6d61ec6f4984bee6ce5c0afc185fe9c941734e7fde15ec8b6\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.410152794Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.849210290Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" with command [/usr/bin/check-status -r] and timeout 1 (s)"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.872583445Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-live -bird-live] and timeout 1 (s)"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.935617555Z" level=info msg="Exec process \"dceb9d33114bc3e5935a7ba882adba2b25f40deda5d78080bea045604e2806c7\" exits with exit code 0 and error <nil>"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.935667850Z" level=info msg="Finish piping \"stderr\" of container exec \"dceb9d33114bc3e5935a7ba882adba2b25f40deda5d78080bea045604e2806c7\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.935845637Z" level=info msg="Finish piping \"stdout\" of container exec \"dceb9d33114bc3e5935a7ba882adba2b25f40deda5d78080bea045604e2806c7\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.937281612Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" returns with exit code 0"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.980301979Z" level=info msg="Finish piping \"stderr\" of container exec \"0bf324bd8668d38e41a997c97b01b2eac753fa493ff78fb883e364a1b01d4a44\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.980454888Z" level=info msg="Finish piping \"stdout\" of container exec \"0bf324bd8668d38e41a997c97b01b2eac753fa493ff78fb883e364a1b01d4a44\""
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.980853752Z" level=info msg="Exec process \"0bf324bd8668d38e41a997c97b01b2eac753fa493ff78fb883e364a1b01d4a44\" exits with exit code 0 and error <nil>"
Jul 06 21:04:46 calium containerd[2163]: time="2021-07-06T21:04:46.981631392Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.304182198Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-ready -bird-ready] and timeout 1 (s)"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.403932914Z" level=info msg="Finish piping \"stderr\" of container exec \"abfcfcb66765837b2570ee339e59787686b24423897b42eca03d45e7e685de90\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.404108837Z" level=info msg="Finish piping \"stdout\" of container exec \"abfcfcb66765837b2570ee339e59787686b24423897b42eca03d45e7e685de90\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.404236368Z" level=info msg="Exec process \"abfcfcb66765837b2570ee339e59787686b24423897b42eca03d45e7e685de90\" exits with exit code 0 and error <nil>"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.405058372Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.849253379Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" with command [/usr/bin/check-status -r] and timeout 1 (s)"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.872103400Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-live -bird-live] and timeout 1 (s)"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.932741452Z" level=info msg="Exec process \"f5e51558011e80be0d30137e2179d9dde1df9a4f337e87a65a82313490acdf08\" exits with exit code 0 and error <nil>"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.932832003Z" level=info msg="Finish piping \"stdout\" of container exec \"f5e51558011e80be0d30137e2179d9dde1df9a4f337e87a65a82313490acdf08\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.932890043Z" level=info msg="Finish piping \"stderr\" of container exec \"f5e51558011e80be0d30137e2179d9dde1df9a4f337e87a65a82313490acdf08\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.934827767Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" returns with exit code 0"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.967275990Z" level=info msg="Finish piping \"stderr\" of container exec \"e321358bbf7908a6e7028daef9ea3e1f11dba072f0ce2f1ae751b7d1d14cff05\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.967475477Z" level=info msg="Finish piping \"stdout\" of container exec \"e321358bbf7908a6e7028daef9ea3e1f11dba072f0ce2f1ae751b7d1d14cff05\""
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.967514110Z" level=info msg="Exec process \"e321358bbf7908a6e7028daef9ea3e1f11dba072f0ce2f1ae751b7d1d14cff05\" exits with exit code 0 and error <nil>"
Jul 06 21:04:56 calium containerd[2163]: time="2021-07-06T21:04:56.968576148Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.304073669Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-ready -bird-ready] and timeout 1 (s)"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.420223460Z" level=info msg="Finish piping \"stderr\" of container exec \"c142f00ad6be555682721bda8c562e102865c94440587d00d92c79989fb072c7\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.420566920Z" level=info msg="Finish piping \"stdout\" of container exec \"c142f00ad6be555682721bda8c562e102865c94440587d00d92c79989fb072c7\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.421103404Z" level=info msg="Exec process \"c142f00ad6be555682721bda8c562e102865c94440587d00d92c79989fb072c7\" exits with exit code 0 and error <nil>"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.423033604Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.849411700Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" with command [/usr/bin/check-status -r] and timeout 1 (s)"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.872257708Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-live -bird-live] and timeout 1 (s)"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.935783041Z" level=info msg="Exec process \"6b781e25ad68f4b56102d312661c329e53b44408d0a3385eebc5d7ae8e75a704\" exits with exit code 0 and error <nil>"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.935927886Z" level=info msg="Finish piping \"stderr\" of container exec \"6b781e25ad68f4b56102d312661c329e53b44408d0a3385eebc5d7ae8e75a704\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.935927685Z" level=info msg="Finish piping \"stdout\" of container exec \"6b781e25ad68f4b56102d312661c329e53b44408d0a3385eebc5d7ae8e75a704\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.936667584Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" returns with exit code 0"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.969504498Z" level=info msg="Finish piping \"stderr\" of container exec \"a699b4b192863662be2b428ce13b05638c9b7ca448a783ded1f9ba6943ad2f22\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.969504438Z" level=info msg="Exec process \"a699b4b192863662be2b428ce13b05638c9b7ca448a783ded1f9ba6943ad2f22\" exits with exit code 0 and error <nil>"
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.969531298Z" level=info msg="Finish piping \"stdout\" of container exec \"a699b4b192863662be2b428ce13b05638c9b7ca448a783ded1f9ba6943ad2f22\""
Jul 06 21:05:06 calium containerd[2163]: time="2021-07-06T21:05:06.970266078Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.303912189Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-ready -bird-ready] and timeout 1 (s)"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.366147534Z" level=info msg="Exec process \"b98969de37b6ad90c5007d6ac179b7db8d48806ef629959584f37ed47b7b4dfa\" exits with exit code 0 and error <nil>"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.366512154Z" level=info msg="Finish piping \"stderr\" of container exec \"b98969de37b6ad90c5007d6ac179b7db8d48806ef629959584f37ed47b7b4dfa\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.366513075Z" level=info msg="Finish piping \"stdout\" of container exec \"b98969de37b6ad90c5007d6ac179b7db8d48806ef629959584f37ed47b7b4dfa\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.367374585Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.849221925Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" with command [/usr/bin/check-status -r] and timeout 1 (s)"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.872211049Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" with command [/bin/calico-node -felix-live -bird-live] and timeout 1 (s)"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.933335152Z" level=info msg="Exec process \"55848cc9a17ef3f77df8aa0077df0e4ec17fed14a1bb346bd01811731dcd5232\" exits with exit code 0 and error <nil>"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.933618488Z" level=info msg="Finish piping \"stdout\" of container exec \"55848cc9a17ef3f77df8aa0077df0e4ec17fed14a1bb346bd01811731dcd5232\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.933781105Z" level=info msg="Finish piping \"stderr\" of container exec \"55848cc9a17ef3f77df8aa0077df0e4ec17fed14a1bb346bd01811731dcd5232\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.935649900Z" level=info msg="ExecSync for \"abcb8fb5160a8a424ab91eb65129f61ef06503bb693a46d6f2ce03a0b5bbec6d\" returns with exit code 0"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.967563964Z" level=info msg="Finish piping \"stderr\" of container exec \"befdd09d855b6289bb95467165355f73623ae1401c472d516cedfc4d02590c12\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.967715551Z" level=info msg="Finish piping \"stdout\" of container exec \"befdd09d855b6289bb95467165355f73623ae1401c472d516cedfc4d02590c12\""
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.967749565Z" level=info msg="Exec process \"befdd09d855b6289bb95467165355f73623ae1401c472d516cedfc4d02590c12\" exits with exit code 0 and error <nil>"
Jul 06 21:05:16 calium containerd[2163]: time="2021-07-06T21:05:16.968890664Z" level=info msg="ExecSync for \"a6f120dee40373221a6df913aa6c8754a27034b3b4c84ab42a0a719af6fa16f1\" returns with exit code 0"
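
The repeating ExecSync pairs above are the kubelet's exec liveness/readiness probes: `/bin/calico-node -felix-ready -bird-ready` and `-felix-live -bird-live` for calico-node, and `/usr/bin/check-status -r` for calico-kube-controllers, each run every 10 seconds with a 1-second timeout. A minimal stdlib sketch of the same exec-with-timeout pattern — run locally here for illustration, whereas the real probe executes inside the container via containerd:

```go
// probe.go — a stdlib sketch of what each ExecSync line amounts to:
// run a probe command with a 1-second deadline and inspect the exit status.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// The probe command from the logs; substitute any binary on your host.
	cmd := exec.CommandContext(ctx, "/bin/calico-node", "-felix-ready", "-bird-ready")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %q\n", out)
	if err != nil {
		// Covers both a non-zero exit code and the context deadline firing.
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("probe succeeded: exit code 0")
}
```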
*
* ==> describe nodes <==
* Name: calium
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=calium
kubernetes.io/os=linux
minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11-dirty
minikube.k8s.io/name=calium
minikube.k8s.io/updated_at=2021_07_06T13_38_26_0700
minikube.k8s.io/version=v1.21.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: io.cilium.network.ipv4-cilium-host: 10.0.2.140
io.cilium.network.ipv4-pod-cidr: 10.0.2.0/24
kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.122.10/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.238.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 06 Jul 2021 20:38:21 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: calium
AcquireTime: <unset>
RenewTime: Tue, 06 Jul 2021 21:05:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 06 Jul 2021 20:43:43 +0000 Tue, 06 Jul 2021 20:43:43 +0000 CiliumIsUp Cilium is running on this node
MemoryPressure False Tue, 06 Jul 2021 21:04:15 +0000 Tue, 06 Jul 2021 20:38:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 06 Jul 2021 21:04:15 +0000 Tue, 06 Jul 2021 20:38:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 06 Jul 2021 21:04:15 +0000 Tue, 06 Jul 2021 20:38:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 06 Jul 2021 21:04:15 +0000 Tue, 06 Jul 2021 20:38:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.9
Hostname: calium
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
System Info:
Machine ID: 7c5c6c64083f456a9292fdbd2485f48d
System UUID: 7c5c6c64-083f-456a-9292-fdbd2485f48d
Boot ID: 2796fc48-b5b2-4c86-a2ba-d66bfd89e884
Kernel Version: 4.19.182
OS Image: Buildroot 2020.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.4
Kubelet Version: v1.20.7
Kube-Proxy Version: v1.20.7
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 calico-kube-controllers-55ffdb7658-g2lww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 calico-node-b2lql                           250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 cilium-xg95x                                100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
  kube-system                 etcd-calium                                 100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
  kube-system                 kube-apiserver-calium                       250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 kube-controller-manager-calium              200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 kube-proxy-sw6g4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 kube-scheduler-calium                       100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                1 (50%)      0 (0%)
  memory             200Mi (8%)   0 (0%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 27m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 27m (x3 over 27m) kubelet Node calium status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 27m (x3 over 27m) kubelet Node calium status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 27m (x3 over 27m) kubelet Node calium status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 27m kubelet Updated Node Allocatable limit across pods
Normal Starting 26m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26m kubelet Node calium status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26m kubelet Node calium status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26m kubelet Node calium status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26m kubelet Updated Node Allocatable limit across pods
Normal Starting 26m kube-proxy Starting kube-proxy.
Normal NodeReady 26m kubelet Node calium status is now: NodeReady
Name: calium-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=calium-m02
kubernetes.io/os=linux
Annotations: io.cilium.network.ipv4-cilium-host: 10.0.0.69
io.cilium.network.ipv4-pod-cidr: 10.0.0.0/24
kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.122.107/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.200.192
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 06 Jul 2021 20:39:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: calium-m02
AcquireTime: <unset>
RenewTime: Tue, 06 Jul 2021 21:05:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 06 Jul 2021 20:43:43 +0000 Tue, 06 Jul 2021 20:43:43 +0000 CiliumIsUp Cilium is running on this node
MemoryPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.51
Hostname: calium-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
System Info:
Machine ID: 42d3383e3b1646d889c8e9e93261be24
System UUID: 42d3383e-3b16-46d8-89c8-e9e93261be24
Boot ID: 59ec5cea-801e-44b3-9e8e-c77ae365fadc
Kernel Version: 4.19.182
OS Image: Buildroot 2020.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.4
Kubelet Version: v1.20.7
Kube-Proxy Version: v1.20.7
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
  default                     before-5d65d8fb8f-9jx5k             0 (0%)        0 (0%)      0 (0%)           0 (0%)           24m
  kube-system                 calico-node-t6jmx                   250m (12%)    0 (0%)      0 (0%)           0 (0%)           26m
  kube-system                 cilium-operator-6cfb5cd4c6-69ffw    0 (0%)        0 (0%)      0 (0%)           0 (0%)           22m
  kube-system                 cilium-qhpb8                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)           22m
  kube-system                 coredns-74ff55c5b-sh854             100m (5%)     0 (0%)      70Mi (2%)        170Mi (6%)       14m
  kube-system                 kube-proxy-jgbrc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)           26m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                450m (22%)   0 (0%)
  memory             170Mi (6%)   170Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 26m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26m (x2 over 26m) kubelet Node calium-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26m (x2 over 26m) kubelet Node calium-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26m (x2 over 26m) kubelet Node calium-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26m kubelet Updated Node Allocatable limit across pods
Normal Starting 26m kube-proxy Starting kube-proxy.
Normal NodeReady 25m kubelet Node calium-m02 status is now: NodeReady
Name: calium-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=calium-m03
kubernetes.io/os=linux
Annotations: io.cilium.network.ipv4-cilium-host: 10.0.1.16
io.cilium.network.ipv4-pod-cidr: 10.0.1.0/24
kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.122.236/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.28.64
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 06 Jul 2021 20:39:49 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: calium-m03
AcquireTime: <unset>
RenewTime: Tue, 06 Jul 2021 21:05:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 06 Jul 2021 20:43:43 +0000 Tue, 06 Jul 2021 20:43:43 +0000 CiliumIsUp Cilium is running on this node
MemoryPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 06 Jul 2021 21:03:55 +0000 Tue, 06 Jul 2021 20:39:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.221
Hostname: calium-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2550856Ki
pods: 110
System Info:
Machine ID: 84c87dcc74984411bbd7deaca1b3680d
System UUID: 84c87dcc-7498-4411-bbd7-deaca1b3680d
Boot ID: 477a29f4-9978-4a12-98d2-fc203456a227
Kernel Version: 4.19.182
OS Image: Buildroot 2020.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.4
Kubelet Version: v1.20.7
Kube-Proxy Version: v1.20.7
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
  default                     before-5d65d8fb8f-2k8bl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
  default                     before-5d65d8fb8f-r7wsx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                 calico-node-txkld                   250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
  kube-system                 cilium-kd72k                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
  kube-system                 cilium-operator-6cfb5cd4c6-vqnxt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
  kube-system                 kube-proxy-4sctv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                350m (17%)   0 (0%)
  memory             100Mi (4%)   0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 25m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 25m (x2 over 25m) kubelet Node calium-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25m (x2 over 25m) kubelet Node calium-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25m (x2 over 25m) kubelet Node calium-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 25m kubelet Updated Node Allocatable limit across pods
Normal Starting 25m kube-proxy Starting kube-proxy.
Normal NodeReady 25m kubelet Node calium-m03 status is now: NodeReady
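
`minikube logs` embeds `kubectl describe nodes` output above; the Conditions tables are what the earlier `node_conditions.go` lines walk when verifying NodePressure. A minimal client-go sketch that lists the same per-node conditions, assuming a reachable kubeconfig at the default `~/.kube/config` path:

```go
// conditions.go — a client-go sketch that prints each node's conditions,
// mirroring the Conditions tables in the describe output above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t(%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}
```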
*
* ==> dmesg <==
* [Jul 6 20:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.030527] Decoding supported only on Scalable MCA processors.
[ +2.614021] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.894989] systemd-fstab-generator[1156]: Ignoring "noauto" for root device
[ +0.024886] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000000] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.463252] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
[ +1.044619] vboxguest: loading out-of-tree module taints kernel.
[ +0.003352] vboxguest: PCI device not found, probably running on physical hardware.
[ +2.084440] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +12.600798] systemd-fstab-generator[2103]: Ignoring "noauto" for root device
[ +0.238018] systemd-fstab-generator[2152]: Ignoring "noauto" for root device
[Jul 6 20:38] systemd-fstab-generator[2322]: Ignoring "noauto" for root device
[ +20.275436] systemd-fstab-generator[2688]: Ignoring "noauto" for root device
[ +17.557477] kauditd_printk_skb: 38 callbacks suppressed
[Jul 6 20:39] kauditd_printk_skb: 38 callbacks suppressed
[ +12.731706] kauditd_printk_skb: 29 callbacks suppressed
[ +35.366691] NFSD: Unable to end grace period: -110
[Jul 6 20:43] kauditd_printk_skb: 11 callbacks suppressed
[ +5.723302] kauditd_printk_skb: 113 callbacks suppressed
[Jul 6 20:44] kauditd_printk_skb: 26 callbacks suppressed
[ +24.097268] kauditd_printk_skb: 2 callbacks suppressed
[ +12.598142] kauditd_printk_skb: 2 callbacks suppressed
*
* ==> etcd [5241f584ee3f187b54a33991d6e47c7c1dccd788fb1e8e2eeb7be543eed6fc33] <==
* 2021-07-06 20:56:08.490149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:56:18.490212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:56:28.489950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:56:38.490104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:56:48.490083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:56:58.489897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:08.489827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:18.489876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:28.489891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:38.490626 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:48.489960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:57:58.490008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:08.489823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:18.489486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:18.799740 I | mvcc: store.index: compact 2328
2021-07-06 20:58:18.815971 I | mvcc: finished scheduled compaction at 2328 (took 15.472822ms)
2021-07-06 20:58:28.489784 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:38.489789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:48.490160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:58:58.489711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:08.490241 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:18.490026 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:28.489874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:38.490172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:48.489749 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 20:59:58.489902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:08.496203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:18.489912 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:28.489894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:38.489983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:48.489923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:00:58.490242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:08.490588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:18.490071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:28.489856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:38.490540 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:48.490104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:01:58.489844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:08.489836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:18.490358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:28.489845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:38.489867 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:48.490143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:02:58.489635 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:08.489850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:18.489730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:18.807547 I | mvcc: store.index: compact 2753
2021-07-06 21:03:18.821772 I | mvcc: finished scheduled compaction at 2753 (took 13.371176ms)
2021-07-06 21:03:28.489667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:38.489896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:48.489770 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:03:58.489696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:08.489924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:18.489894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:28.489920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:38.490011 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:48.490347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:04:58.489249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:05:08.489855 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-07-06 21:05:18.490025 I | etcdserver/api/etcdhttp: /health OK (status code 200)
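
etcd logs an `/health OK` line every 10 seconds because its health endpoint on the client port is being polled. A minimal sketch of such a poller follows; the plain-HTTP URL is an assumption for illustration — a kubeadm-provisioned etcd actually serves HTTPS on 2379 and requires client certificates:

```go
// healthpoll.go — a sketch of a 10-second health poller like the one
// producing the /health lines above. The plain-HTTP URL is a placeholder;
// real etcd here would need TLS client credentials.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Get("http://127.0.0.1:2379/health")
		if err != nil {
			fmt.Println("health check failed:", err)
			continue
		}
		fmt.Printf("%s /health %s\n",
			time.Now().UTC().Format("2006-01-02 15:04:05"), resp.Status)
		resp.Body.Close()
	}
}
```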
*
* ==> kernel <==
* 21:05:22 up 27 min, 0 users, load average: 0.34, 0.41, 0.34
Linux calium 4.19.182 #1 SMP Wed Jun 9 00:54:54 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.12"
*
* ==> kube-apiserver [739bdc4c8f670a08024a54d377748f80ab8ee58bf20d46102d1798554f673afe] <==
* I0706 20:53:44.409559 1 client.go:360] parsed scheme: "passthrough"
I0706 20:53:44.409637 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:53:44.409650 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:54:25.655838 1 client.go:360] parsed scheme: "passthrough"
I0706 20:54:25.656179 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:54:25.656324 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:55:05.575349 1 client.go:360] parsed scheme: "passthrough"
I0706 20:55:05.575648 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:55:05.575765 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:55:37.289555 1 client.go:360] parsed scheme: "passthrough"
I0706 20:55:37.289616 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:55:37.289628 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:56:08.558075 1 client.go:360] parsed scheme: "passthrough"
I0706 20:56:08.558161 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:56:08.558173 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:56:42.877935 1 client.go:360] parsed scheme: "passthrough"
I0706 20:56:42.877997 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:56:42.878010 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:57:22.131392 1 client.go:360] parsed scheme: "passthrough"
I0706 20:57:22.131473 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:57:22.131486 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:57:58.366987 1 client.go:360] parsed scheme: "passthrough"
I0706 20:57:58.367096 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:57:58.367111 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:58:29.676088 1 client.go:360] parsed scheme: "passthrough"
I0706 20:58:29.676132 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:58:29.676141 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:58:59.878894 1 client.go:360] parsed scheme: "passthrough"
I0706 20:58:59.878985 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:58:59.879064 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 20:59:37.195618 1 client.go:360] parsed scheme: "passthrough"
I0706 20:59:37.195675 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 20:59:37.195689 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:00:12.838138 1 client.go:360] parsed scheme: "passthrough"
I0706 21:00:12.838194 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:00:12.838206 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:00:53.523307 1 client.go:360] parsed scheme: "passthrough"
I0706 21:00:53.523365 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:00:53.523378 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:01:26.552633 1 client.go:360] parsed scheme: "passthrough"
I0706 21:01:26.552693 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:01:26.552705 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:02:10.015674 1 client.go:360] parsed scheme: "passthrough"
I0706 21:02:10.015705 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:02:10.015711 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:02:44.911398 1 client.go:360] parsed scheme: "passthrough"
I0706 21:02:44.911813 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:02:44.912001 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:03:26.449836 1 client.go:360] parsed scheme: "passthrough"
I0706 21:03:26.449884 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:03:26.449907 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:04:07.120102 1 client.go:360] parsed scheme: "passthrough"
I0706 21:04:07.120424 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:04:07.120446 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:04:41.499517 1 client.go:360] parsed scheme: "passthrough"
I0706 21:04:41.499879 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:04:41.500115 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0706 21:05:13.571779 1 client.go:360] parsed scheme: "passthrough"
I0706 21:05:13.572245 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0706 21:05:13.572539 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
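
The repeating `parsed scheme: "passthrough"` / `switching balancer to "pick_first"` triplets are the apiserver's etcd client periodically re-establishing its gRPC connection: `passthrough` is gRPC-Go's default resolver for literal addresses, and `pick_first` its default balancer. A minimal gRPC-Go sketch that exercises the same resolver/balancer selection — the insecure credentials and local address are assumptions for illustration, not the apiserver's actual TLS setup:

```go
// grpcdial.go — a sketch of dialing a literal address through gRPC's
// passthrough resolver, which hands it to the pick_first balancer.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// "passthrough:///" is the default scheme gRPC-Go applies to bare addresses.
	conn, err := grpc.DialContext(ctx, "passthrough:///127.0.0.1:2379",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	// Dialing is lazy by default, so the state here is typically IDLE.
	fmt.Println("state:", conn.GetState())
}
```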
*
* ==> kube-controller-manager [bbe2a780c24fae03989367d66ab7c37eda3fe8183207fad6a8c70a4ed9f266a2] <==
* W0706 20:38:50.221992 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0706 20:38:50.222016 1 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
E0706 20:38:50.222211 1 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0706 20:38:50.222256 1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0706 20:38:50.222278 1 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
I0706 20:39:01.099702 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W0706 20:39:12.965259 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="calium-m02" does not exist
I0706 20:39:13.238852 1 range_allocator.go:373] Set node calium-m02 PodCIDR to [10.244.1.0/24]
I0706 20:39:13.270787 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jgbrc"
I0706 20:39:13.280746 1 event.go:291] "Event occurred" object="kube-system/calico-node" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-t6jmx"
E0706 20:39:13.331074 1 daemon_controller.go:320] kube-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9662a5bf-12a6-41bf-bb89-fb37b47137e9", ResourceVersion:"603", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761200706, loc:(*time.Location)(0x6f9a440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"calico-node\"},\"name\":\"calico-node\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"calico-node\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"calico-node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DATASTORE_TYPE\",\"value\":\"kubernetes\"},{\"name\":\"WAIT_FOR_DATASTORE\",\"value\":\"true\"},{\"name\":\"NODENAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}},{\"name\":\"CLUSTER_TYPE\",\"value\":\"k8s,bgp\"},{\"name\":\"IP\",\"value\":\"autodetect\"},{\"name\":\"CALICO_IPV4POOL_IPIP\",\"value\":\"Always\"},{\"name\":\"CALICO_IPV4POOL_VXLAN\",\"value\":\"Never\"},{\"name\":\"FELIX_IPINIPMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"FELIX_VXLANMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"CALICO_DISABLE_FILE_LOGGING\",\"value\":\"true\"},{\"name\":\"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\"value\":\"ACCEPT\"},{\"name\":\"FELIX_IPV6SUPPORT\",\"value\":\"false\"},{\"name\":\"FELIX_LOGSEVERITYSCREEN\",\"value\":\"info\"},{\"name\":\"FELIX_HEALTHENABLED\",\"value\":\"true\"},{\"name\":\"IP_AUTODETECTION_METHOD\",\"value\":\"interface=eth.*\"}],\"image\":\"calico/node:v3.14.1\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-live\",\"-bird-live\"]},\"failureThreshold\":6,\"initialDelaySeconds\":10,\"periodSeconds\":10},\"name\":\"calico-node\",\"readinessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-ready\",\"-bird-ready\"]},\"periodSeconds\":10},\"resources\":{\"requests\":{\"cpu\":\"250m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/var/run/calico\",\"name\":\"var-run-calico\",\"readOnly\":false},{\"mountPath\":\"/var/lib/calico\",\"name\":\"var-lib-calico\",\"readOnly\":false},{\"mountPath\":\"/var/run/nodeagent\",\"name\":\"policysync\"}]}],\"hostNetwork\":true,\"initContainers\":[{\"command\":[\"/opt/cni/bin/calico-ipam\",\"-upgrade\"],\"env\":[{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"upgrade-ipam\",\"securityContext\":
{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/var/lib/cni/networks\",\"name\":\"host-local-net-dir\"},{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"}]},{\"command\":[\"/install-cni.sh\"],\"env\":[{\"name\":\"CNI_CONF_NAME\",\"value\":\"10-calico.conflist\"},{\"name\":\"CNI_NETWORK_CONFIG\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"cni_network_config\",\"name\":\"calico-config\"}}},{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CNI_MTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"SLEEP\",\"value\":\"false\"}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"install-cni\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"},{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"cni-net-dir\"}]},{\"image\":\"calico/pod2daemon-flexvol:v3.14.1\",\"name\":\"flexvol-driver\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/driver\",\"name\":\"flexvol-driver-host\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"calico-node\",\"terminationGracePeriodSeconds\":0,\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"},{\"key\":\"CriticalAddonsOnly\",\"operator\":\"Exists\"},{\"effect\":\"NoExecute\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"},{\"hostPath\":{\"path\":\"/var/run/calico\"},\"name\":\"var-run-calico\"},{\"hostPath\":{\"path\":\"/var/lib/calico\"},\"name\":\"var-lib-calico\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/opt/cni/bin\"},\"name\":\"cni-bin-dir\"},{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-net-dir\"},{\"hostPath\":{\"path\":\"/var/lib/cni/networks\"},\"name\":\"host-local-net-dir\"},{\"hostPath\":{\"path\":\"/var/run/nodeagent\",\"type\":\"DirectoryOrCreate\"},\"name\":\"policysync\"},{\"hostPath\":{\"path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds\",\"type\":\"DirectoryOrCreate\"},\"name\":\"flexvol-driver-host\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002385660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002385680)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0023856a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023856c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0023856e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"scheduler.alpha.kubernetes.io/critical-pod":""}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-run-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385760), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0023857a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"host-local-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0023857c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"policysync", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0023857e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"flexvol-driver-host", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002385800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"upgrade-ipam", Image:"calico/cni:v3.14.1", Command:[]string{"/opt/cni/bin/calico-ipam", "-upgrade"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002385960)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0023859a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"host-local-net-dir", ReadOnly:false, MountPath:"/var/lib/cni/networks", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0023872c0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"install-cni", Image:"calico/cni:v3.14.1", Command:[]string{"/install-cni.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CNI_CONF_NAME", Value:"10-calico.conflist", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CNI_NETWORK_CONFIG", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0023859c0)}, v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0023859e0)}, v1.EnvVar{Name:"CNI_MTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002385a20)}, v1.EnvVar{Name:"SLEEP", Value:"false", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-net-dir", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002387320), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"flexvol-driver", Image:"calico/pod2daemon-flexvol:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"flexvol-driver-host", ReadOnly:false, MountPath:"/host/driver", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002387380), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"calico-node", Image:"calico/node:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"WAIT_FOR_DATASTORE", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"NODENAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002385820)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002385860)}, v1.EnvVar{Name:"CLUSTER_TYPE", Value:"k8s,bgp", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP", Value:"autodetect", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_IPIP", Value:"Always", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_VXLAN", Value:"Never", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPINIPMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002385880)}, v1.EnvVar{Name:"FELIX_VXLANMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0023858a0)}, v1.EnvVar{Name:"CALICO_DISABLE_FILE_LOGGING", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_DEFAULTENDPOINTTOHOSTACTION", Value:"ACCEPT", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPV6SUPPORT", Value:"false", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_LOGSEVERITYSCREEN", Value:"info", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_HEALTHENABLED", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP_AUTODETECTION_METHOD", Value:"interface=eth.*", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:250, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"250m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-run-calico", ReadOnly:false, MountPath:"/var/run/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-lib-calico", ReadOnly:false, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"policysync", ReadOnly:false, MountPath:"/var/run/nodeagent", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002398c00), ReadinessProbe:(*v1.Probe)(0xc002398c30), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002387260), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002337ee8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-node", DeprecatedServiceAccount:"calico-node", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00056aa10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e59108)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0023be0c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
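The E0706 daemon_controller.go:320 record above is Kubernetes' optimistic concurrency check firing: the controller tried to write DaemonSet status against resourceVersion "738"-era state while another writer had already bumped the object, so the API server rejected the stale write with "the object has been modified; please apply your changes to the latest version and try again". The controller re-queues and retries on its own, so the error is transient noise. For client code that hits the same 409 Conflict, client-go ships a canonical helper; below is a minimal sketch, assuming a kubeconfig at the default path and client-go v0.21-era signatures, with the label mutation purely hypothetical.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Load ~/.kube/config (RecommendedHomeFile is client-go's default path).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// RetryOnConflict re-runs the closure whenever the API server answers
	// 409 Conflict. Re-reading inside the closure means each attempt mutates
	// the latest resourceVersion rather than a stale copy -- exactly the
	// "apply your changes to the latest version" advice in the log message.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(
			context.TODO(), "calico-node", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example/touched"] = "true" // hypothetical mutation
		_, err = client.AppsV1().DaemonSets("kube-system").Update(
			context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("update landed against the latest resourceVersion")
}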
W0706 20:39:16.101006 1 node_lifecycle_controller.go:1044] Missing timestamp for Node calium-m02. Assuming now as a timestamp.
I0706 20:39:16.101166 1 event.go:291] "Event occurred" object="calium-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node calium-m02 event: Registered Node calium-m02 in Controller"
W0706 20:39:49.198560 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="calium-m03" does not exist
I0706 20:39:49.472402 1 range_allocator.go:373] Set node calium-m03 PodCIDR to [10.244.2.0/24]
I0706 20:39:49.495846 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4sctv"
I0706 20:39:49.496001 1 event.go:291] "Event occurred" object="kube-system/calico-node" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-txkld"
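The range_allocator line above shows the nodeipam controller carving the third /24 of the cluster CIDR (10.244.2.0/24) for calium-m03 as it registers, after which the kube-proxy and calico-node DaemonSets each create a pod for the new node. A quick way to confirm the assignment from client code, as a sketch under the same kubeconfig assumption as the previous snippet:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Spec.PodCIDR holds the range the allocator just logged; Spec.PodCIDRs
	// is the dual-stack-aware list that supersedes it.
	node, err := client.CoreV1().Nodes().Get(
		context.TODO(), "calium-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Spec.PodCIDR, node.Spec.PodCIDRs)
}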
E0706 20:39:49.586614 1 daemon_controller.go:320] kube-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9662a5bf-12a6-41bf-bb89-fb37b47137e9", ResourceVersion:"739", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761200706, loc:(*time.Location)(0x6f9a440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"calico-node\"},\"name\":\"calico-node\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"calico-node\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"calico-node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DATASTORE_TYPE\",\"value\":\"kubernetes\"},{\"name\":\"WAIT_FOR_DATASTORE\",\"value\":\"true\"},{\"name\":\"NODENAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}},{\"name\":\"CLUSTER_TYPE\",\"value\":\"k8s,bgp\"},{\"name\":\"IP\",\"value\":\"autodetect\"},{\"name\":\"CALICO_IPV4POOL_IPIP\",\"value\":\"Always\"},{\"name\":\"CALICO_IPV4POOL_VXLAN\",\"value\":\"Never\"},{\"name\":\"FELIX_IPINIPMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"FELIX_VXLANMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"CALICO_DISABLE_FILE_LOGGING\",\"value\":\"true\"},{\"name\":\"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\"value\":\"ACCEPT\"},{\"name\":\"FELIX_IPV6SUPPORT\",\"value\":\"false\"},{\"name\":\"FELIX_LOGSEVERITYSCREEN\",\"value\":\"info\"},{\"name\":\"FELIX_HEALTHENABLED\",\"value\":\"true\"},{\"name\":\"IP_AUTODETECTION_METHOD\",\"value\":\"interface=eth.*\"}],\"image\":\"calico/node:v3.14.1\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-live\",\"-bird-live\"]},\"failureThreshold\":6,\"initialDelaySeconds\":10,\"periodSeconds\":10},\"name\":\"calico-node\",\"readinessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-ready\",\"-bird-ready\"]},\"periodSeconds\":10},\"resources\":{\"requests\":{\"cpu\":\"250m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/var/run/calico\",\"name\":\"var-run-calico\",\"readOnly\":false},{\"mountPath\":\"/var/lib/calico\",\"name\":\"var-lib-calico\",\"readOnly\":false},{\"mountPath\":\"/var/run/nodeagent\",\"name\":\"policysync\"}]}],\"hostNetwork\":true,\"initContainers\":[{\"command\":[\"/opt/cni/bin/calico-ipam\",\"-upgrade\"],\"env\":[{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"upgrade-ipam\",\"securityContext\":
{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/var/lib/cni/networks\",\"name\":\"host-local-net-dir\"},{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"}]},{\"command\":[\"/install-cni.sh\"],\"env\":[{\"name\":\"CNI_CONF_NAME\",\"value\":\"10-calico.conflist\"},{\"name\":\"CNI_NETWORK_CONFIG\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"cni_network_config\",\"name\":\"calico-config\"}}},{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CNI_MTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"SLEEP\",\"value\":\"false\"}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"install-cni\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"},{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"cni-net-dir\"}]},{\"image\":\"calico/pod2daemon-flexvol:v3.14.1\",\"name\":\"flexvol-driver\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/driver\",\"name\":\"flexvol-driver-host\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"calico-node\",\"terminationGracePeriodSeconds\":0,\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"},{\"key\":\"CriticalAddonsOnly\",\"operator\":\"Exists\"},{\"effect\":\"NoExecute\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"},{\"hostPath\":{\"path\":\"/var/run/calico\"},\"name\":\"var-run-calico\"},{\"hostPath\":{\"path\":\"/var/lib/calico\"},\"name\":\"var-lib-calico\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/opt/cni/bin\"},\"name\":\"cni-bin-dir\"},{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-net-dir\"},{\"hostPath\":{\"path\":\"/var/lib/cni/networks\"},\"name\":\"host-local-net-dir\"},{\"hostPath\":{\"path\":\"/var/run/nodeagent\",\"type\":\"DirectoryOrCreate\"},\"name\":\"policysync\"},{\"hostPath\":{\"path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds\",\"type\":\"DirectoryOrCreate\"},\"name\":\"flexvol-driver-host\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002698060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002698080)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0026980a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0026980c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0026980e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"scheduler.alpha.kubernetes.io/critical-pod":""}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-run-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026981a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"host-local-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026981c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"policysync", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026981e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"flexvol-driver-host", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002698200), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"upgrade-ipam", Image:"calico/cni:v3.14.1", Command:[]string{"/opt/cni/bin/calico-ipam", "-upgrade"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002698360)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026983a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"host-local-net-dir", ReadOnly:false, MountPath:"/var/lib/cni/networks", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0024bc4e0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"install-cni", Image:"calico/cni:v3.14.1", Command:[]string{"/install-cni.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CNI_CONF_NAME", Value:"10-calico.conflist", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CNI_NETWORK_CONFIG", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026983c0)}, v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026983e0)}, v1.EnvVar{Name:"CNI_MTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002698420)}, v1.EnvVar{Name:"SLEEP", Value:"false", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-net-dir", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0024bc540), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"flexvol-driver", Image:"calico/pod2daemon-flexvol:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"flexvol-driver-host", ReadOnly:false, MountPath:"/host/driver", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0024bc5a0), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"calico-node", Image:"calico/node:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"WAIT_FOR_DATASTORE", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"NODENAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002698220)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002698260)}, v1.EnvVar{Name:"CLUSTER_TYPE", Value:"k8s,bgp", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP", Value:"autodetect", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_IPIP", Value:"Always", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_VXLAN", Value:"Never", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPINIPMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002698280)}, v1.EnvVar{Name:"FELIX_VXLANMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0026982a0)}, v1.EnvVar{Name:"CALICO_DISABLE_FILE_LOGGING", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_DEFAULTENDPOINTTOHOSTACTION", Value:"ACCEPT", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPV6SUPPORT", Value:"false", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_LOGSEVERITYSCREEN", Value:"info", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_HEALTHENABLED", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP_AUTODETECTION_METHOD", Value:"interface=eth.*", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:250, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"250m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-run-calico", ReadOnly:false, MountPath:"/var/run/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-lib-calico", ReadOnly:false, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"policysync", ReadOnly:false, MountPath:"/var/run/nodeagent", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0025d91d0), ReadinessProbe:(*v1.Probe)(0xc0025d9200), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0024bc480), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026929c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-node", DeprecatedServiceAccount:"calico-node", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000547650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e591d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002692ba0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
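Note what changed between the two conflict dumps: ResourceVersion moved from "739" to "758" and DaemonSetStatus climbed from 1 scheduled/ready node to 2, so each conflict coincides with a worker (calium-m02, then calium-m03) joining and the status being rewritten underneath the controller. A minimal sketch for watching those counters settle, same client-go assumptions as above:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll until every desired node reports Ready; the printed fields are the
	// same ones visible in the Status{} blocks of the dumps above.
	for {
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(
			context.TODO(), "calico-node", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		s := ds.Status
		fmt.Printf("rv=%s desired=%d scheduled=%d ready=%d\n",
			ds.ResourceVersion, s.DesiredNumberScheduled,
			s.CurrentNumberScheduled, s.NumberReady)
		if s.DesiredNumberScheduled > 0 && s.NumberReady == s.DesiredNumberScheduled {
			return
		}
		time.Sleep(2 * time.Second)
	}
}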
E0706 20:39:49.627922 1 daemon_controller.go:320] kube-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9662a5bf-12a6-41bf-bb89-fb37b47137e9", ResourceVersion:"758", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761200706, loc:(*time.Location)(0x6f9a440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"calico-node\"},\"name\":\"calico-node\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"calico-node\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"calico-node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DATASTORE_TYPE\",\"value\":\"kubernetes\"},{\"name\":\"WAIT_FOR_DATASTORE\",\"value\":\"true\"},{\"name\":\"NODENAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}},{\"name\":\"CLUSTER_TYPE\",\"value\":\"k8s,bgp\"},{\"name\":\"IP\",\"value\":\"autodetect\"},{\"name\":\"CALICO_IPV4POOL_IPIP\",\"value\":\"Always\"},{\"name\":\"CALICO_IPV4POOL_VXLAN\",\"value\":\"Never\"},{\"name\":\"FELIX_IPINIPMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"FELIX_VXLANMTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"CALICO_DISABLE_FILE_LOGGING\",\"value\":\"true\"},{\"name\":\"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\"value\":\"ACCEPT\"},{\"name\":\"FELIX_IPV6SUPPORT\",\"value\":\"false\"},{\"name\":\"FELIX_LOGSEVERITYSCREEN\",\"value\":\"info\"},{\"name\":\"FELIX_HEALTHENABLED\",\"value\":\"true\"},{\"name\":\"IP_AUTODETECTION_METHOD\",\"value\":\"interface=eth.*\"}],\"image\":\"calico/node:v3.14.1\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-live\",\"-bird-live\"]},\"failureThreshold\":6,\"initialDelaySeconds\":10,\"periodSeconds\":10},\"name\":\"calico-node\",\"readinessProbe\":{\"exec\":{\"command\":[\"/bin/calico-node\",\"-felix-ready\",\"-bird-ready\"]},\"periodSeconds\":10},\"resources\":{\"requests\":{\"cpu\":\"250m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/var/run/calico\",\"name\":\"var-run-calico\",\"readOnly\":false},{\"mountPath\":\"/var/lib/calico\",\"name\":\"var-lib-calico\",\"readOnly\":false},{\"mountPath\":\"/var/run/nodeagent\",\"name\":\"policysync\"}]}],\"hostNetwork\":true,\"initContainers\":[{\"command\":[\"/opt/cni/bin/calico-ipam\",\"-upgrade\"],\"env\":[{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CALICO_NETWORKING_BACKEND\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"calico_backend\",\"name\":\"calico-config\"}}}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"upgrade-ipam\",\"securityContext\":
{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/var/lib/cni/networks\",\"name\":\"host-local-net-dir\"},{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"}]},{\"command\":[\"/install-cni.sh\"],\"env\":[{\"name\":\"CNI_CONF_NAME\",\"value\":\"10-calico.conflist\"},{\"name\":\"CNI_NETWORK_CONFIG\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"cni_network_config\",\"name\":\"calico-config\"}}},{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CNI_MTU\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"veth_mtu\",\"name\":\"calico-config\"}}},{\"name\":\"SLEEP\",\"value\":\"false\"}],\"image\":\"calico/cni:v3.14.1\",\"name\":\"install-cni\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"},{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"cni-net-dir\"}]},{\"image\":\"calico/pod2daemon-flexvol:v3.14.1\",\"name\":\"flexvol-driver\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/driver\",\"name\":\"flexvol-driver-host\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"calico-node\",\"terminationGracePeriodSeconds\":0,\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"},{\"key\":\"CriticalAddonsOnly\",\"operator\":\"Exists\"},{\"effect\":\"NoExecute\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"},{\"hostPath\":{\"path\":\"/var/run/calico\"},\"name\":\"var-run-calico\"},{\"hostPath\":{\"path\":\"/var/lib/calico\"},\"name\":\"var-lib-calico\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/opt/cni/bin\"},\"name\":\"cni-bin-dir\"},{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-net-dir\"},{\"hostPath\":{\"path\":\"/var/lib/cni/networks\"},\"name\":\"host-local-net-dir\"},{\"hostPath\":{\"path\":\"/var/run/nodeagent\",\"type\":\"DirectoryOrCreate\"},\"name\":\"policysync\"},{\"hostPath\":{\"path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds\",\"type\":\"DirectoryOrCreate\"},\"name\":\"flexvol-driver-host\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00289d720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00289d740)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00289d760), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00289d780)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00289d7a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"scheduler.alpha.kubernetes.io/critical-pod":""}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d7c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-run-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d7e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"host-local-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"policysync", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d8a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"flexvol-driver-host", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00289d8c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"upgrade-ipam", Image:"calico/cni:v3.14.1", Command:[]string{"/opt/cni/bin/calico-ipam", "-upgrade"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289da20)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289da60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"host-local-net-dir", ReadOnly:false, MountPath:"/var/lib/cni/networks", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0027163c0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"install-cni", Image:"calico/cni:v3.14.1", Command:[]string{"/install-cni.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CNI_CONF_NAME", Value:"10-calico.conflist", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CNI_NETWORK_CONFIG", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289da80)}, v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289daa0)}, v1.EnvVar{Name:"CNI_MTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289dae0)}, v1.EnvVar{Name:"SLEEP", Value:"false", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-net-dir", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002716420), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"flexvol-driver", Image:"calico/pod2daemon-flexvol:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"flexvol-driver-host", ReadOnly:false, MountPath:"/host/driver", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002716480), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"calico-node", Image:"calico/node:v3.14.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"WAIT_FOR_DATASTORE", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"NODENAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289d8e0)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289d920)}, v1.EnvVar{Name:"CLUSTER_TYPE", Value:"k8s,bgp", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP", Value:"autodetect", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_IPIP", Value:"Always", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_VXLAN", Value:"Never", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPINIPMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289d940)}, v1.EnvVar{Name:"FELIX_VXLANMTU", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00289d960)}, v1.EnvVar{Name:"CALICO_DISABLE_FILE_LOGGING", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_DEFAULTENDPOINTTOHOSTACTION", Value:"ACCEPT", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPV6SUPPORT", Value:"false", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_LOGSEVERITYSCREEN", Value:"info", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_HEALTHENABLED", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP_AUTODETECTION_METHOD", Value:"interface=eth.*", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:250, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"250m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-run-calico", ReadOnly:false, MountPath:"/var/run/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-lib-calico", ReadOnly:false, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"policysync", ReadOnly:false, MountPath:"/var/run/nodeagent", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00288d380), ReadinessProbe:(*v1.Probe)(0xc00288d3b0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002716360), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025cc188), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-node", DeprecatedServiceAccount:"calico-node", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005502a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006ee038)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0025cc360)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
W0706 20:39:51.131836 1 node_lifecycle_controller.go:1044] Missing timestamp for Node calium-m03. Assuming now as a timestamp.
I0706 20:39:51.132492 1 event.go:291] "Event occurred" object="calium-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node calium-m03 event: Registered Node calium-m03 in Controller"
I0706 20:40:33.863607 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set before-78db8fbb5f to 3"
I0706 20:40:33.891557 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-78db8fbb5f-hsdhx"
I0706 20:40:33.911119 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-78db8fbb5f-9f747"
I0706 20:40:33.934326 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-78db8fbb5f-gc5dt"
I0706 20:41:17.184196 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set before-5d65d8fb8f to 1"
I0706 20:41:17.199215 1 event.go:291] "Event occurred" object="default/before-5d65d8fb8f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-5d65d8fb8f-r7wsx"
I0706 20:41:21.464972 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set before-78db8fbb5f to 2"
I0706 20:41:21.487529 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: before-78db8fbb5f-hsdhx"
I0706 20:41:21.488888 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set before-5d65d8fb8f to 2"
I0706 20:41:21.516530 1 event.go:291] "Event occurred" object="default/before-5d65d8fb8f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-5d65d8fb8f-9jx5k"
I0706 20:41:25.210888 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set before-78db8fbb5f to 1"
I0706 20:41:25.223667 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set before-5d65d8fb8f to 3"
I0706 20:41:25.223955 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: before-78db8fbb5f-9f747"
I0706 20:41:25.236591 1 event.go:291] "Event occurred" object="default/before-5d65d8fb8f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: before-5d65d8fb8f-2k8bl"
I0706 20:41:27.449402 1 event.go:291] "Event occurred" object="default/before" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set before-78db8fbb5f to 0"
I0706 20:41:27.458155 1 event.go:291] "Event occurred" object="default/before-78db8fbb5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: before-78db8fbb5f-gc5dt"
I0706 20:43:20.385032 1 event.go:291] "Event occurred" object="kube-system/cilium-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cilium-operator-6cfb5cd4c6 to 2"
I0706 20:43:20.394392 1 event.go:291] "Event occurred" object="kube-system/cilium-operator-6cfb5cd4c6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cilium-operator-6cfb5cd4c6-69ffw"
I0706 20:43:20.402571 1 event.go:291] "Event occurred" object="kube-system/cilium" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cilium-qhpb8"
I0706 20:43:20.413321 1 event.go:291] "Event occurred" object="kube-system/cilium-operator-6cfb5cd4c6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cilium-operator-6cfb5cd4c6-vqnxt"
I0706 20:43:20.422838 1 event.go:291] "Event occurred" object="kube-system/cilium" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cilium-kd72k"
I0706 20:43:20.423372 1 event.go:291] "Event occurred" object="kube-system/cilium" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cilium-xg95x"
E0706 20:43:20.522894 1 daemon_controller.go:320] kube-system/cilium failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"790a8c71-d277-4deb-a4aa-246bde2dd5bc", ResourceVersion:"1224", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761201000, loc:(*time.Location)(0x6f9a440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"cilium"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"cilium\"},\"name\":\"cilium\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"cilium\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"cilium\"}},\"spec\":{\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"kubernetes.io/os\",\"operator\":\"In\",\"values\":[\"linux\"]}]},{\"matchExpressions\":[{\"key\":\"beta.kubernetes.io/os\",\"operator\":\"In\",\"values\":[\"linux\"]}]}]}},\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"k8s-app\",\"operator\":\"In\",\"values\":[\"cilium\"]}]},\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"args\":[\"--config-dir=/tmp/cilium/config-map\"],\"command\":[\"cilium-agent\"],\"env\":[{\"name\":\"K8S_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"CILIUM_K8S_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"CILIUM_CLUSTERMESH_CONFIG\",\"value\":\"/var/lib/cilium/clustermesh/\"},{\"name\":\"CILIUM_CNI_CHAINING_MODE\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"cni-chaining-mode\",\"name\":\"cilium-config\",\"optional\":true}}},{\"name\":\"CILIUM_CUSTOM_CNI_CONF\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"custom-cni-conf\",\"name\":\"cilium-config\",\"optional\":true}}}],\"image\":\"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9\",\"imagePullPolicy\":\"IfNotPresent\",\"lifecycle\":{\"postStart\":{\"exec\":{\"command\":[\"/cni-install.sh\",\"--enable-debug=false\",\"--cni-exclusive=true\"]}},\"preStop\":{\"exec\":{\"command\":[\"/cni-uninstall.sh\"]}}},\"livenessProbe\":{\"failureThreshold\":10,\"httpGet\":{\"host\":\"127.0.0.1\",\"httpHeaders\":[{\"name\":\"brief\",\"value\":\"true\"}],\"path\":\"/healthz\",\"port\":9876,\"scheme\":\"HTTP\"},\"periodSeconds\":30,\"successThreshold\":1,\"timeoutSeconds\":5},\"name\":\"cilium-agent\",\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"host\":\"127.0.0.1\",\"httpHeaders\":[{\"name\":\"brief\",\"value\":\"true\"}],\"path\":\"/healthz\",\"port\":9876,\"scheme\":\"HTTP\"},\"periodSeconds\":30,\"successThreshold\":1,\"timeoutSeconds\":5},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_ADMIN\",\"SYS_MODULE\"]},\"privileged\":true},\"startupProbe\":{\"failureThreshold\":24,\"httpGet\":{\"host\":\"127.0.0.1\",\"httpHeaders\":[{\"name\":\"brief\",\"value\":\"true\"}],\"path\":\"/healthz\",\"port\":9876,\"scheme\":\"HTTP\"},\"p
eriodSeconds\":2,\"successThreshold\":1},\"volumeMounts\":[{\"mountPath\":\"/sys/fs/bpf\",\"name\":\"bpf-maps\"},{\"mountPath\":\"/var/run/cilium\",\"name\":\"cilium-run\"},{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-path\"},{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"etc-cni-netd\"},{\"mountPath\":\"/var/lib/cilium/clustermesh\",\"name\":\"clustermesh-secrets\",\"readOnly\":true},{\"mountPath\":\"/tmp/cilium/config-map\",\"name\":\"cilium-config-path\",\"readOnly\":true},{\"mountPath\":\"/tmp/cni-configuration\",\"name\":\"cni-configuration\",\"readOnly\":true},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\"},{\"mountPath\":\"/var/lib/cilium/tls/hubble\",\"name\":\"hubble-tls\",\"readOnly\":true}]}],\"hostNetwork\":true,\"initContainers\":[{\"command\":[\"nsenter\",\"--cgroup=/hostpid1ns/cgroup\",\"--mount=/hostpid1ns/mnt\",\"--\",\"sh\",\"-c\",\"mount | grep \\\"$CGROUP_ROOT type cgroup2\\\" || { echo \\\"Mounting cgroup filesystem...\\\"; mount -t cgroup2 none $CGROUP_ROOT; }\"],\"env\":[{\"name\":\"CGROUP_ROOT\",\"value\":\"/run/cilium/cgroupv2\"}],\"image\":\"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9\",\"name\":\"mount-cgroup\",\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/hostpid1ns\",\"name\":\"host-proc-ns\"}]},{\"command\":[\"/init-container.sh\"],\"env\":[{\"name\":\"CILIUM_ALL_STATE\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"clean-cilium-state\",\"name\":\"cilium-config\",\"optional\":true}}},{\"name\":\"CILIUM_BPF_STATE\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"clean-cilium-bpf-state\",\"name\":\"cilium-config\",\"optional\":true}}},{\"name\":\"CILIUM_WAIT_BPF_MOUNT\",\"valueFrom\":{\"configMapKeyRef\":{\"key\":\"wait-bpf-mount\",\"name\":\"cilium-config\",\"optional\":true}}}],\"image\":\"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"clean-cilium-state\",\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_ADMIN\"]},\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/sys/fs/bpf\",\"mountPropagation\":\"HostToContainer\",\"name\":\"bpf-maps\"},{\"mountPath\":\"/run/cilium/cgroupv2\",\"mountPropagation\":\"HostToContainer\",\"name\":\"cilium-cgroup\"},{\"mountPath\":\"/var/run/cilium\",\"name\":\"cilium-run\"}]}],\"priorityClassName\":\"system-node-critical\",\"restartPolicy\":\"Always\",\"serviceAccount\":\"cilium\",\"serviceAccountName\":\"cilium\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/var/run/cilium\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cilium-run\"},{\"hostPath\":{\"path\":\"/sys/fs/bpf\",\"type\":\"DirectoryOrCreate\"},\"name\":\"bpf-maps\"},{\"hostPath\":{\"path\":\"/proc/1/ns\",\"type\":\"Directory\"},\"name\":\"host-proc-ns\"},{\"hostPath\":{\"path\":\"/run/cilium/cgroupv2\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cilium-cgroup\"},{\"hostPath\":{\"path\":\"/opt/cni/bin\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-path\"},{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"etc-cni-netd\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"name\":
\"clustermesh-secrets\",\"secret\":{\"defaultMode\":420,\"optional\":true,\"secretName\":\"cilium-clustermesh\"}},{\"configMap\":{\"name\":\"cilium-config\"},\"name\":\"cilium-config-path\"},{\"configMap\":{\"name\":\"cni-configuration\"},\"name\":\"cni-configuration\"},{\"name\":\"hubble-tls\",\"projected\":{\"sources\":[{\"secret\":{\"items\":[{\"key\":\"ca.crt\",\"path\":\"client-ca.crt\"},{\"key\":\"tls.crt\",\"path\":\"server.crt\"},{\"key\":\"tls.key\",\"path\":\"server.key\"}],\"name\":\"hubble-server-certs\",\"optional\":true}}]}}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":2},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e9a460), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e9a480)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e9a4a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e9a4c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e9a5e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"cilium"}, Annotations:map[string]string{"scheduler.alpha.kubernetes.io/critical-pod":""}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cilium-run", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"bpf-maps", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), 
NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"host-proc-ns", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a6a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cilium-cgroup", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-path", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a760), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"etc-cni-netd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a7a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e9a7c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"clustermesh-secrets", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001876580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cilium-config-path", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0018765c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-configuration", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001876600), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"hubble-tls", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000e9a7e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"mount-cgroup", Image:"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9", Command:[]string{"nsenter", "--cgroup=/hostpid1ns/cgroup", "--mount=/hostpid1ns/mnt", "--", "sh", "-c", "mount | grep \"$CGROUP_ROOT type cgroup2\" || { echo \"Mounting cgroup filesystem...\"; mount -t cgroup2 none $CGROUP_ROOT; }"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CGROUP_ROOT", Value:"/run/cilium/cgroupv2", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"host-proc-ns", ReadOnly:false, MountPath:"/hostpid1ns", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00230f2c0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"clean-cilium-state", Image:"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9", Command:[]string{"/init-container.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CILIUM_ALL_STATE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9ab40)}, v1.EnvVar{Name:"CILIUM_BPF_STATE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9ab60)}, v1.EnvVar{Name:"CILIUM_WAIT_BPF_MOUNT", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9ab80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"bpf-maps", ReadOnly:false, MountPath:"/sys/fs/bpf", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(0xc0015ba7c0), SubPathExpr:""}, v1.VolumeMount{Name:"cilium-cgroup", ReadOnly:false, MountPath:"/run/cilium/cgroupv2", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(0xc0015ba7d0), SubPathExpr:""}, v1.VolumeMount{Name:"cilium-run", ReadOnly:false, MountPath:"/var/run/cilium", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00230f320), Stdin:false, StdinOnce:false, TTY:false}}, 
Containers:[]v1.Container{v1.Container{Name:"cilium-agent", Image:"quay.io/cilium/cilium:v1.10.2@sha256:1112a29c8fe04c2a47e5e250112a940c9b81d6700b7e8bba159ab996a05282b9", Command:[]string{"cilium-agent"}, Args:[]string{"--config-dir=/tmp/cilium/config-map"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"K8S_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9a840)}, v1.EnvVar{Name:"CILIUM_K8S_NAMESPACE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9a880)}, v1.EnvVar{Name:"CILIUM_CLUSTERMESH_CONFIG", Value:"/var/lib/cilium/clustermesh/", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CILIUM_CNI_CHAINING_MODE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9a8c0)}, v1.EnvVar{Name:"CILIUM_CUSTOM_CNI_CONF", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e9a8e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"bpf-maps", ReadOnly:false, MountPath:"/sys/fs/bpf", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cilium-run", ReadOnly:false, MountPath:"/var/run/cilium", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-path", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"etc-cni-netd", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"clustermesh-secrets", ReadOnly:true, MountPath:"/var/lib/cilium/clustermesh", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cilium-config-path", ReadOnly:true, MountPath:"/tmp/cilium/config-map", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-configuration", ReadOnly:true, MountPath:"/tmp/cni-configuration", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"hubble-tls", ReadOnly:true, MountPath:"/var/lib/cilium/tls/hubble", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001c507e0), ReadinessProbe:(*v1.Probe)(0xc001c50810), StartupProbe:(*v1.Probe)(0xc001c508d0), Lifecycle:(*v1.Lifecycle)(0xc0015ba730), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00230f200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00094f788), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"cilium", DeprecatedServiceAccount:"cilium", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000673180), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc000e9aa20), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e58900)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00094fd18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:0, NumberUnavailable:3, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "cilium": the object has been modified; please apply your changes to the latest version and try again
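
The cilium DaemonSet hits the same 409 moments after being applied: the status write (CurrentNumberScheduled:1, NumberReady:0 against DesiredNumberScheduled:3) raced with another writer, and the controller retries on its next sync. To confirm the rollout converges despite these messages, the usual checks are (plain kubectl, names taken from the log above):

    kubectl -n kube-system rollout status daemonset/cilium
    kubectl -n kube-system get daemonset cilium -o wide
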
I0706 20:43:41.993005 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9kb84"
I0706 20:43:42.026375 1 event.go:291] "Event occurred" object="kube-system/kube-dns" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0706 20:43:46.529558 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ciliumendpoints.cilium.io
I0706 20:43:46.530220 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ciliumlocalredirectpolicies.cilium.io
I0706 20:43:46.530274 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ciliumnetworkpolicies.cilium.io
I0706 20:43:46.531109 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0706 20:43:46.631846 1 shared_informer.go:247] Caches are synced for resource quota
I0706 20:43:47.819852 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0706 20:43:48.020499 1 shared_informer.go:247] Caches are synced for garbage collector
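
The shared_informer.go pairs above ("Waiting for caches to sync" followed by "Caches are synced") are client-go informers blocking until their local caches hold a complete LIST of the watched resources, so the quota and garbage-collector controllers never act on a partial view of the cluster. A minimal sketch of that same pattern, again assuming a kubeconfig at the default path:

    // Sketch: the cache-sync barrier behind the shared_informer.go lines above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Shared informer factory; resync period 0 means no periodic resync.
        factory := informers.NewSharedInformerFactory(clientset, 0)
        podInformer := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // This is the step the log reports: block until the local cache
        // reflects a full LIST of the watched resources.
        if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
            panic("caches failed to sync")
        }
        fmt.Println("caches are synced")
    }
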
I0706 20:44:11.986771 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-t7d4n"
I0706 20:51:03.402713 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-sh854"
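
Read together, the events above tell the story of this session: the `before` Deployment rolls from replica set before-78db8fbb5f to before-5d65d8fb8f one pod at a time, the cilium DaemonSet and operator are created at 20:43:20, and coredns pods are recreated as the cluster settles. The FailedToUpdateEndpoint warning at 20:43:42 is the same benign 409 conflict seen earlier, this time on the kube-dns Endpoints object. To replay a sequence like this from a live cluster rather than from saved logs:

    kubectl get events -A --sort-by=.lastTimestamp
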
*
* ==> kube-proxy [de64726cb4778d2d030bbe3452c21bececf529db3330bb2805fcd2d67c50a4c3] <==
* I0706 20:38:42.084976 1 node.go:172] Successfully retrieved node IP: 192.168.50.9
I0706 20:38:42.085272 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.50.9), assume IPv4 operation
W0706 20:38:42.132069 1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
I0706 20:38:42.132415 1 server_others.go:185] Using iptables Proxier.
I0706 20:38:42.133724 1 server.go:650] Version: v1.20.7
I0706 20:38:42.134370 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0706 20:38:42.134417 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0706 20:38:42.134790 1 config.go:315] Starting service config controller
I0706 20:38:42.136761 1 shared_informer.go:240] Waiting for caches to sync for service config
I0706 20:38:42.134805 1 config.go:224] Starting endpoint slice config controller
I0706 20:38:42.136798 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0706 20:38:42.236928 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0706 20:38:42.237066 1 shared_informer.go:247] Caches are synced for service config
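
The conntrack.go:100 lines record kube-proxy tuning netfilter connection tracking at startup: 86400 s (24 h) for established TCP flows and 3600 s (1 h) for close-wait. The equivalent manual settings, shown only to decode the values (normally left to kube-proxy):

    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait=3600

The `Unknown proxy mode ""` warning just means no --proxy-mode flag was set, so kube-proxy fell back to its iptables default, as the next line confirms.
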
*
* ==> kube-scheduler [c613519a1ff9a5fba57dcf0d5ed1d764e6b42bd8c22d0dc80ee7cdab3548fdb3] <==
* I0706 20:38:19.008625 1 serving.go:331] Generated self-signed cert in-memory
W0706 20:38:21.590261 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0706 20:38:21.590499 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0706 20:38:21.590650 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0706 20:38:21.590714 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0706 20:38:21.619528 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0706 20:38:21.620732 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0706 20:38:21.620750 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0706 20:38:21.620765 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0706 20:38:21.622076 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0706 20:38:21.627269 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0706 20:38:21.630405 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0706 20:38:21.630469 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0706 20:38:21.630530 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0706 20:38:21.630588 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0706 20:38:21.630644 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0706 20:38:21.630696 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0706 20:38:21.631201 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0706 20:38:21.631289 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0706 20:38:21.631347 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0706 20:38:21.631543 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0706 20:38:22.520329 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0706 20:38:22.649870 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0706 20:38:22.811820 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0706 20:38:22.851840 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0706 20:38:24.420881 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
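The burst of forbidden list/watch errors is typical of control-plane bootstrap: the scheduler came up before the default RBAC roles were reconciled, and the errors stop on their own once they are (hence the final cache-sync line at 20:38:24). If the extension-apiserver-authentication warning had persisted, the log's own hint applies; filling in its placeholders for the scheduler's user identity (the binding name here is illustrative):

    kubectl create rolebinding scheduler-authn-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler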
*
* ==> kubelet <==
* -- Logs begin at Tue 2021-07-06 20:37:43 UTC, end at Tue 2021-07-06 21:05:22 UTC. --
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.296961 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.297588 2697 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.297601 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.297613 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.297868 2697 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.297884 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.297897 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.298300 2697 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.298322 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.298341 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.298630 2697 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.298645 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.298657 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.299235 2697 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.299419 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
Jul 06 20:38:50 calium kubelet[2697]: E0706 20:38:50.299519 2697 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
Jul 06 20:38:50 calium kubelet[2697]: W0706 20:38:50.432063 2697 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod57fea957-9efa-4675-82ae-d90791c1479e/4a7264cc7c27777f535829810ac9776d472156b0764a40e6db9b00a0c1babd21 WatchSource:0}: task 4a7264cc7c27777f535829810ac9776d472156b0764a40e6db9b00a0c1babd21 not found: not found
Jul 06 20:38:51 calium kubelet[2697]: W0706 20:38:51.937871 2697 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod57fea957-9efa-4675-82ae-d90791c1479e/0db198bf980d2ad068cf289927f1712e582acbedc8b32c7c60e3f39e155c11cb WatchSource:0}: task 0db198bf980d2ad068cf289927f1712e582acbedc8b32c7c60e3f39e155c11cb not found: not found
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.261575 2697 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.267019 2697 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.283259 2697 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.341168 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-8r6ww" (UniqueName: "kubernetes.io/secret/49f3cee4-f7a6-481b-be1d-20f2ecbc0138-calico-kube-controllers-token-8r6ww") pod "calico-kube-controllers-55ffdb7658-g2lww" (UID: "49f3cee4-f7a6-481b-be1d-20f2ecbc0138")
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.341217 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27895701-4f5a-47c2-b29e-391f06a63833-config-volume") pod "coredns-74ff55c5b-l6zk6" (UID: "27895701-4f5a-47c2-b29e-391f06a63833")
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.341234 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-2kqc5" (UniqueName: "kubernetes.io/secret/27895701-4f5a-47c2-b29e-391f06a63833-coredns-token-2kqc5") pod "coredns-74ff55c5b-l6zk6" (UID: "27895701-4f5a-47c2-b29e-391f06a63833")
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.341253 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cdt5m" (UniqueName: "kubernetes.io/secret/2122996e-d352-460c-8c17-53df4e3d7b9f-storage-provisioner-token-cdt5m") pod "storage-provisioner" (UID: "2122996e-d352-460c-8c17-53df4e3d7b9f")
Jul 06 20:39:00 calium kubelet[2697]: I0706 20:39:00.341265 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/2122996e-d352-460c-8c17-53df4e3d7b9f-tmp") pod "storage-provisioner" (UID: "2122996e-d352-460c-8c17-53df4e3d7b9f")
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.786164 2697 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6bbcb5843a6556416417123a50bd181768f52229c05870a517211f49be6891d4": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.786229 2697 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-l6zk6_kube-system(27895701-4f5a-47c2-b29e-391f06a63833)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6bbcb5843a6556416417123a50bd181768f52229c05870a517211f49be6891d4": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.786238 2697 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-l6zk6_kube-system(27895701-4f5a-47c2-b29e-391f06a63833)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6bbcb5843a6556416417123a50bd181768f52229c05870a517211f49be6891d4": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.786284 2697 pod_workers.go:191] Error syncing pod 27895701-4f5a-47c2-b29e-391f06a63833 ("coredns-74ff55c5b-l6zk6_kube-system(27895701-4f5a-47c2-b29e-391f06a63833)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-l6zk6_kube-system(27895701-4f5a-47c2-b29e-391f06a63833)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-l6zk6_kube-system(27895701-4f5a-47c2-b29e-391f06a63833)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbcb5843a6556416417123a50bd181768f52229c05870a517211f49be6891d4\": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.807635 2697 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "899880cfd124735aa97fc848eebda69f2405a2224802258b02ee500482fcb9d7": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.807681 2697 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-55ffdb7658-g2lww_kube-system(49f3cee4-f7a6-481b-be1d-20f2ecbc0138)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "899880cfd124735aa97fc848eebda69f2405a2224802258b02ee500482fcb9d7": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.807691 2697 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-55ffdb7658-g2lww_kube-system(49f3cee4-f7a6-481b-be1d-20f2ecbc0138)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "899880cfd124735aa97fc848eebda69f2405a2224802258b02ee500482fcb9d7": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Jul 06 20:39:00 calium kubelet[2697]: E0706 20:39:00.807726 2697 pod_workers.go:191] Error syncing pod 49f3cee4-f7a6-481b-be1d-20f2ecbc0138 ("calico-kube-controllers-55ffdb7658-g2lww_kube-system(49f3cee4-f7a6-481b-be1d-20f2ecbc0138)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-55ffdb7658-g2lww_kube-system(49f3cee4-f7a6-481b-be1d-20f2ecbc0138)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-55ffdb7658-g2lww_kube-system(49f3cee4-f7a6-481b-be1d-20f2ecbc0138)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"899880cfd124735aa97fc848eebda69f2405a2224802258b02ee500482fcb9d7\": stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.445928 2697 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586362 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cilium-run" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cilium-run") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586428 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-xtables-lock") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586442 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cilium-cgroup" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cilium-cgroup") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586451 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-cni-netd" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-etc-cni-netd") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586490 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-configuration" (UniqueName: "kubernetes.io/configmap/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cni-configuration") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586528 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cilium-config-path") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586556 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-lib-modules") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586567 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "bpf-maps" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-bpf-maps") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586577 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-proc-ns" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-host-proc-ns") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586586 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-hubble-tls") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586637 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-path" (UniqueName: "kubernetes.io/host-path/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cni-path") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586648 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-clustermesh-secrets") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:20 calium kubelet[2697]: I0706 20:43:20.586695 2697 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cilium-token-fpgk7" (UniqueName: "kubernetes.io/secret/49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30-cilium-token-fpgk7") pod "cilium-xg95x" (UID: "49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30")
Jul 06 20:43:39 calium kubelet[2697]: W0706 20:43:39.373959 2697 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod49a97d9f-1b6a-4d10-8ebd-6c6be1bd8c30/7c6ad97edc025c84aa9783d21d0c6776c7781d37df8bfb136c28f5acfc63ec31 WatchSource:0}: task 7c6ad97edc025c84aa9783d21d0c6776c7781d37df8bfb136c28f5acfc63ec31 not found: not found
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.827307 2697 scope.go:111] [topologymanager] RemoveContainer - Container ID: 6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.861459 2697 scope.go:111] [topologymanager] RemoveContainer - Container ID: 6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984
Jul 06 20:43:47 calium kubelet[2697]: E0706 20:43:47.864736 2697 remote_runtime.go:332] ContainerStatus "6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984": not found
Jul 06 20:43:47 calium kubelet[2697]: W0706 20:43:47.864839 2697 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={containerd 6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984}): failed to get container status "6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984": rpc error: code = NotFound desc = an error occurred when try to find container "6b3e5c2aae63dc4c48192c305d2a053613f56b129f49fc745522f35ac908d984": not found
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.990141 2697 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-2kqc5" (UniqueName: "kubernetes.io/secret/27895701-4f5a-47c2-b29e-391f06a63833-coredns-token-2kqc5") pod "27895701-4f5a-47c2-b29e-391f06a63833" (UID: "27895701-4f5a-47c2-b29e-391f06a63833")
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.990170 2697 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27895701-4f5a-47c2-b29e-391f06a63833-config-volume") pod "27895701-4f5a-47c2-b29e-391f06a63833" (UID: "27895701-4f5a-47c2-b29e-391f06a63833")
Jul 06 20:43:47 calium kubelet[2697]: W0706 20:43:47.992324 2697 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/27895701-4f5a-47c2-b29e-391f06a63833/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.992703 2697 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27895701-4f5a-47c2-b29e-391f06a63833-config-volume" (OuterVolumeSpecName: "config-volume") pod "27895701-4f5a-47c2-b29e-391f06a63833" (UID: "27895701-4f5a-47c2-b29e-391f06a63833"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 06 20:43:47 calium kubelet[2697]: I0706 20:43:47.998883 2697 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27895701-4f5a-47c2-b29e-391f06a63833-coredns-token-2kqc5" (OuterVolumeSpecName: "coredns-token-2kqc5") pod "27895701-4f5a-47c2-b29e-391f06a63833" (UID: "27895701-4f5a-47c2-b29e-391f06a63833"). InnerVolumeSpecName "coredns-token-2kqc5". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 06 20:43:48 calium kubelet[2697]: I0706 20:43:48.091509 2697 reconciler.go:319] Volume detached for volume "coredns-token-2kqc5" (UniqueName: "kubernetes.io/secret/27895701-4f5a-47c2-b29e-391f06a63833-coredns-token-2kqc5") on node "calium" DevicePath ""
Jul 06 20:43:48 calium kubelet[2697]: I0706 20:43:48.091591 2697 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27895701-4f5a-47c2-b29e-391f06a63833-config-volume") on node "calium" DevicePath ""
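Two distinct things are happening in this kubelet log. The repeated FlexVolume errors are the kubelet probing an incomplete driver directory (nodeagent~uds) under /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ and are noise rather than a pod failure. The CreatePodSandbox errors at 20:39 are the real problem: the Calico CNI plugin was invoked before calico/node had written /var/lib/calico/nodename, so coredns and calico-kube-controllers could not get sandboxes. The 20:43 entries then show the cilium DaemonSet pod's volumes being mounted, consistent with the CNI switch made in this session. A few hedged checks for the nodename failure (commands are illustrative, using this gist's profile name):

    kubectl -n kube-system get pods -o wide | grep -E 'calico|cilium'
    minikube ssh -p calium -- ls -l /var/lib/calico/nodename
    minikube ssh -p calium -- ls /etc/cni/net.d/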
*
* ==> storage-provisioner [21bc71ba4dcd72e44320adabe4010efeeedd86bda1cd07229c0ac199e51b4279] <==
* I0706 20:39:01.120298 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0706 20:39:01.146916 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0706 20:39:01.146961 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0706 20:39:01.161195 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0706 20:39:01.162326 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_calium_52212bfc-1d1d-497f-8f0a-88a34d4ab85e!
I0706 20:39:01.163295 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90639c19-aece-47a8-ae4a-e395e7a9bf79", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' calium_52212bfc-1d1d-497f-8f0a-88a34d4ab85e became leader
I0706 20:39:01.263130 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_calium_52212bfc-1d1d-497f-8f0a-88a34d4ab85e!
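The storage-provisioner elects a leader through an Endpoints lock in kube-system, which is what the LeaderElection event records. To see the current holder, one can inspect the lock object and its control-plane.alpha.kubernetes.io/leader annotation (a read-only check):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml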