@carlosonunez
Created May 19, 2023 16:25
minikube + qemu2 + !kindnet issue
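
The audit trail and startup log below come from "minikube logs" for minikube v1.30.1 on macOS 13.1 (arm64) with the qemu2 driver. For reference, here is a sketch of the invocation behind the "Last Start" section further down, reconstructed from the 11:13-11:14 entries in the audit table (the flags are exactly those recorded there; nothing beyond the table is implied about host setup):

    minikube delete
    minikube start \
      --cni=calico \
      --container-runtime=containerd \
      --driver=qemu \
      --extra-config=kubelet.cgroup-driver=systemd \
      --network builtin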
*
* ==> Audit <==
* |---------|-------------------------------------------------------|----------|------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------------------------------------|----------|------|---------|---------------------|---------------------|
| start | --cni=calico | minikube | cn | v1.30.1 | 18 May 23 18:37 CDT | 18 May 23 18:38 CDT |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:39 CDT | 18 May 23 18:39 CDT |
| start | --cni=calico | minikube | cn | v1.30.1 | 18 May 23 18:39 CDT | 18 May 23 18:39 CDT |
| start | --cni=/tmp/antrea.yaml | minikube | cn | v1.30.1 | 18 May 23 18:40 CDT | |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:40 CDT | 18 May 23 18:40 CDT |
| start | --cni=/tmp/antrea.yaml | minikube | cn | v1.30.1 | 18 May 23 18:40 CDT | 18 May 23 18:41 CDT |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:42 CDT | 18 May 23 18:42 CDT |
| start | --cni=calico | minikube | cn | v1.30.1 | 18 May 23 18:42 CDT | 18 May 23 18:43 CDT |
| node | add | minikube | cn | v1.30.1 | 18 May 23 18:43 CDT | 18 May 23 18:43 CDT |
| node | add | minikube | cn | v1.30.1 | 18 May 23 18:43 CDT | 18 May 23 18:43 CDT |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:44 CDT | 18 May 23 18:44 CDT |
| start | --cni=calico --driver=qemu | minikube | cn | v1.30.1 | 18 May 23 18:45 CDT | 18 May 23 18:45 CDT |
| | --network socket_vmnet | | | | | |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:51 CDT | 18 May 23 18:51 CDT |
| start | --cni=calico --driver=qemu | minikube | cn | v1.30.1 | 18 May 23 18:51 CDT | 18 May 23 18:52 CDT |
| | --network builtin | | | | | |
| delete | | minikube | cn | v1.30.1 | 18 May 23 18:56 CDT | 18 May 23 18:56 CDT |
| start | --cni=calico --driver=qemu --network builtin | minikube | cn | v1.30.1 | 18 May 23 18:56 CDT | |
| | --extra-config=kubelet.PodCIDR=100.64.0.0/16 | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:37 CDT | 19 May 23 09:37 CDT |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:38 CDT | 19 May 23 09:38 CDT |
| start | --cni=calico --driver=qemu --network builtin | minikube | cn | v1.30.1 | 19 May 23 09:38 CDT | |
| | --extra-config=kubelet.PodCIDR=100.64.0.0/16 | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:40 CDT | 19 May 23 09:40 CDT |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:44 CDT | 19 May 23 09:44 CDT |
| start | --cni=calico --driver=qemu --network builtin | minikube | cn | v1.30.1 | 19 May 23 09:44 CDT | 19 May 23 09:44 CDT |
| | --extra-config=kubeadm.pod-network-cidr=100.64.0.0/16 | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:53 CDT | 19 May 23 09:53 CDT |
| start | --cni=calico --driver=qemu | minikube | cn | v1.30.1 | 19 May 23 09:53 CDT | 19 May 23 09:54 CDT |
| | --network builtin | | | | | |
| ssh | | minikube | cn | v1.30.1 | 19 May 23 09:56 CDT | 19 May 23 09:57 CDT |
| delete | | minikube | cn | v1.30.1 | 19 May 23 09:58 CDT | 19 May 23 09:58 CDT |
| start | --network-plugin=cni | minikube | cn | v1.30.1 | 19 May 23 09:58 CDT | 19 May 23 09:59 CDT |
| | --cni=calico --driver=qemu | | | | | |
| | --network builtin | | | | | |
| ssh | | minikube | cn | v1.30.1 | 19 May 23 10:09 CDT | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:10 CDT | 19 May 23 10:10 CDT |
| start | --network-plugin=cni | minikube | cn | v1.30.1 | 19 May 23 10:10 CDT | 19 May 23 10:11 CDT |
| | --cni=calico | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | builtin | | | | | |
| ssh | | minikube | cn | v1.30.1 | 19 May 23 10:14 CDT | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:18 CDT | 19 May 23 10:18 CDT |
| start | --network-plugin=cni | minikube | cn | v1.30.1 | 19 May 23 10:18 CDT | |
| | --cni=flannel | | | | | |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | builtin | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:18 CDT | 19 May 23 10:18 CDT |
| start | --cni=flannel | minikube | cn | v1.30.1 | 19 May 23 10:18 CDT | 19 May 23 10:19 CDT |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | builtin | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:22 CDT | 19 May 23 10:22 CDT |
| start | --cni=auto | minikube | cn | v1.30.1 | 19 May 23 10:22 CDT | 19 May 23 10:23 CDT |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | builtin | | | | | |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:23 CDT | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:25 CDT | 19 May 23 10:25 CDT |
| start | --cni=flannel | minikube | cn | v1.30.1 | 19 May 23 10:25 CDT | |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | builtin | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:25 CDT | 19 May 23 10:25 CDT |
| start | --cni=flannel | minikube | cn | v1.30.1 | 19 May 23 10:25 CDT | 19 May 23 10:26 CDT |
| | --container-runtime=containerd | | | | | |
| | --driver=qemu --network | | | | | |
| | socket_vmnet | | | | | |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:26 CDT | 19 May 23 10:27 CDT |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:27 CDT | 19 May 23 10:27 CDT |
| ssh | m02 -- ping 192.168.105.10 | minikube | cn | v1.30.1 | 19 May 23 10:30 CDT | |
| ssh | -n m02 -- ping 192.168.105.10 | minikube | cn | v1.30.1 | 19 May 23 10:30 CDT | 19 May 23 10:30 CDT |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:31 CDT | 19 May 23 10:31 CDT |
| start | --cni=calico --container-runtime=containerd | minikube | cn | v1.30.1 | 19 May 23 10:31 CDT | 19 May 23 10:32 CDT |
| | --driver=qemu | | | | | |
| | --extra-config=kubelet.cgroup-driver=systemd | | | | | |
| | --network socket_vmnet | | | | | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:36 CDT | 19 May 23 10:36 CDT |
| start | --cni=kindnet | minikube | cn | v1.30.1 | 19 May 23 10:36 CDT | 19 May 23 10:37 CDT |
| | --container-runtime=containerd --driver=qemu | | | | | |
| | --extra-config=kubelet.cgroup-driver=systemd | | | | | |
| | --network socket_vmnet | | | | | |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:37 CDT | 19 May 23 10:38 CDT |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:38 CDT | 19 May 23 10:38 CDT |
| delete | | minikube | cn | v1.30.1 | 19 May 23 10:45 CDT | 19 May 23 10:45 CDT |
| start | --cni=kindnet | minikube | cn | v1.30.1 | 19 May 23 10:46 CDT | 19 May 23 10:46 CDT |
| | --container-runtime=containerd --driver=qemu | | | | | |
| | --extra-config=kubelet.cgroup-driver=systemd | | | | | |
| | --network builtin | | | | | |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:47 CDT | |
| node | add | minikube | cn | v1.30.1 | 19 May 23 10:52 CDT | |
| delete | | minikube | cn | v1.30.1 | 19 May 23 11:13 CDT | 19 May 23 11:13 CDT |
| start | --cni=calico --container-runtime=containerd | minikube | cn | v1.30.1 | 19 May 23 11:14 CDT | 19 May 23 11:14 CDT |
| | --driver=qemu | | | | | |
| | --extra-config=kubelet.cgroup-driver=systemd | | | | | |
| | --network builtin | | | | | |
| ssh | -- sudo mkdir /sys/fs/bpf | minikube | cn | v1.30.1 | 19 May 23 11:18 CDT | |
| ssh | -- sudo modprobe bpf | minikube | cn | v1.30.1 | 19 May 23 11:18 CDT | |
|---------|-------------------------------------------------------|----------|------|---------|---------------------|---------------------|
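
A note on reading the table above: a blank End Time means no completion was recorded for that command; it was still running, hung, or was interrupted when the log was captured. The "node add" entries under --network builtin (10:47 and 10:52) never record an end time, while the otherwise identical kindnet run over socket_vmnet (started 10:36) completes both node adds. A sketch of that completing sequence, copied from the audit entries above:

    minikube start \
      --cni=kindnet \
      --container-runtime=containerd \
      --driver=qemu \
      --extra-config=kubelet.cgroup-driver=systemd \
      --network socket_vmnet
    minikube node add
    minikube node add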
*
* ==> Last Start <==
* Log file created at: 2023/05/19 11:14:03
Running on machine: office
Binary: Built with gc go1.20.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0519 11:14:03.603333 51807 out.go:296] Setting OutFile to fd 1 ...
I0519 11:14:03.603433 51807 out.go:348] isatty.IsTerminal(1) = false
I0519 11:14:03.603435 51807 out.go:309] Setting ErrFile to fd 2...
I0519 11:14:03.603437 51807 out.go:348] isatty.IsTerminal(2) = false
I0519 11:14:03.603495 51807 root.go:336] Updating PATH: /Users/cn/.minikube/bin
I0519 11:14:03.603794 51807 out.go:303] Setting JSON to false
I0519 11:14:03.622694 51807 start.go:125] hostinfo: {"hostname":"office","uptime":504178,"bootTime":1684008665,"procs":374,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"bf559689-385c-5eea-a6c1-ecf5815336fd"}
W0519 11:14:03.622776 51807 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0519 11:14:03.629422 51807 out.go:177] * minikube v1.30.1 on Darwin 13.1 (arm64)
I0519 11:14:03.641691 51807 driver.go:375] Setting default libvirt URI to qemu:///system
I0519 11:14:03.641700 51807 notify.go:220] Checking for updates...
I0519 11:14:03.646206 51807 out.go:177] * Using the qemu2 driver based on user configuration
I0519 11:14:03.653583 51807 start.go:295] selected driver: qemu2
I0519 11:14:03.653740 51807 start.go:870] validating driver "qemu2" against <nil>
I0519 11:14:03.653746 51807 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0519 11:14:03.654095 51807 start_flags.go:305] no existing cluster config was found, will generate one from the flags
W0519 11:14:03.654246 51807 out.go:239] ! You are using the QEMU driver without a dedicated network, which doesn't support `minikube service` & `minikube tunnel` commands.
To try the dedicated network see: https://minikube.sigs.k8s.io/docs/drivers/qemu/#networking
I0519 11:14:03.656856 51807 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=8192MB, container=0MB
I0519 11:14:03.656962 51807 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0519 11:14:03.657145 51807 cni.go:84] Creating CNI manager for "calico"
I0519 11:14:03.657149 51807 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
I0519 11:14:03.657153 51807 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP:}
I0519 11:14:03.657553 51807 iso.go:125] acquiring lock: {Name:mk24e5e5d763710f4797f4e12e068bcfac1283bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0519 11:14:03.666301 51807 out.go:177] * Starting control plane node minikube in cluster minikube
I0519 11:14:03.670582 51807 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime containerd
I0519 11:14:03.670599 51807 preload.go:148] Found local preload: /Users/cn/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-containerd-overlay2-arm64.tar.lz4
I0519 11:14:03.670790 51807 cache.go:57] Caching tarball of preloaded images
I0519 11:14:03.671008 51807 preload.go:174] Found /Users/cn/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0519 11:14:03.671011 51807 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on containerd
I0519 11:14:03.671200 51807 profile.go:148] Saving config to /Users/cn/.minikube/profiles/minikube/config.json ...
I0519 11:14:03.671208 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/config.json: {Name:mk868444a3ca307710d7e84d85020fc2c5ebe7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:03.671343 51807 cache.go:193] Successfully downloaded all kic artifacts
I0519 11:14:03.671498 51807 start.go:364] acquiring machines lock for minikube: {Name:mk6ab90e36fcacb14dcd42de333c1d5393a2f5fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0519 11:14:03.671522 51807 start.go:368] acquired machines lock for "minikube" in 20.041µs
I0519 11:14:03.671527 51807 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.30.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0519 11:14:03.671550 51807 start.go:125] createHost starting for "" (driver="qemu2")
I0519 11:14:03.680306 51807 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0519 11:14:03.737071 51807 start.go:159] libmachine.API.Create for "minikube" (driver="qemu2")
I0519 11:14:03.737101 51807 client.go:168] LocalClient.Create starting
I0519 11:14:03.737189 51807 main.go:141] libmachine: Reading certificate data from /Users/cn/.minikube/certs/ca.pem
I0519 11:14:03.737426 51807 main.go:141] libmachine: Decoding PEM data...
I0519 11:14:03.737442 51807 main.go:141] libmachine: Parsing certificate...
I0519 11:14:03.737648 51807 main.go:141] libmachine: Reading certificate data from /Users/cn/.minikube/certs/cert.pem
I0519 11:14:03.737773 51807 main.go:141] libmachine: Decoding PEM data...
I0519 11:14:03.737780 51807 main.go:141] libmachine: Parsing certificate...
I0519 11:14:03.738054 51807 main.go:141] libmachine: port range: 0 -> 65535
I0519 11:14:03.738287 51807 main.go:141] libmachine: Downloading /Users/cn/.minikube/cache/boot2docker.iso from file:///Users/cn/.minikube/cache/iso/arm64/minikube-v1.30.1-arm64.iso...
I0519 11:14:04.167667 51807 main.go:141] libmachine: Creating SSH key...
I0519 11:14:04.292769 51807 main.go:141] libmachine: Creating Disk image...
I0519 11:14:04.292775 51807 main.go:141] libmachine: Creating 20000 MB hard disk image...
I0519 11:14:04.293093 51807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/cn/.minikube/machines/minikube/disk.qcow2.raw /Users/cn/.minikube/machines/minikube/disk.qcow2
I0519 11:14:04.310429 51807 main.go:141] libmachine: STDOUT:
I0519 11:14:04.310441 51807 main.go:141] libmachine: STDERR:
I0519 11:14:04.310510 51807 main.go:141] libmachine: executing: qemu-img resize /Users/cn/.minikube/machines/minikube/disk.qcow2 +20000M
I0519 11:14:04.319721 51807 main.go:141] libmachine: STDOUT: Image resized.
I0519 11:14:04.319739 51807 main.go:141] libmachine: STDERR:
I0519 11:14:04.319747 51807 main.go:141] libmachine: DONE writing to /Users/cn/.minikube/machines/minikube/disk.qcow2.raw and /Users/cn/.minikube/machines/minikube/disk.qcow2
I0519 11:14:04.319749 51807 main.go:141] libmachine: Starting QEMU VM...
I0519 11:14:04.319852 51807 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.0.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/cn/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/cn/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/cn/.minikube/machines/minikube/qemu.pid -nic user,model=virtio,hostfwd=tcp::52167-:22,hostfwd=tcp::52168-:2376,hostname=minikube -daemonize /Users/cn/.minikube/machines/minikube/disk.qcow2
I0519 11:14:04.376747 51807 main.go:141] libmachine: STDOUT:
I0519 11:14:04.376764 51807 main.go:141] libmachine: STDERR:
I0519 11:14:04.376773 51807 main.go:141] libmachine: Waiting for VM to start (ssh -p 52167 docker@127.0.0.1)...
I0519 11:14:23.908885 51807 machine.go:88] provisioning docker machine ...
I0519 11:14:23.908922 51807 buildroot.go:166] provisioning hostname "minikube"
I0519 11:14:23.909822 51807 main.go:141] libmachine: Using SSH client type: native
I0519 11:14:23.910228 51807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050e3560] 0x1050e5f40 <nil> [] 0s} localhost 52167 <nil> <nil>}
I0519 11:14:23.910234 51807 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0519 11:14:23.978395 51807 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0519 11:14:23.978523 51807 main.go:141] libmachine: Using SSH client type: native
I0519 11:14:23.978814 51807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050e3560] 0x1050e5f40 <nil> [] 0s} localhost 52167 <nil> <nil>}
I0519 11:14:23.978821 51807 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
else
echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
fi
fi
I0519 11:14:24.035912 51807 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0519 11:14:24.035921 51807 buildroot.go:172] set auth options {CertDir:/Users/cn/.minikube CaCertPath:/Users/cn/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/cn/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/cn/.minikube/machines/server.pem ServerKeyPath:/Users/cn/.minikube/machines/server-key.pem ClientKeyPath:/Users/cn/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/cn/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/cn/.minikube}
I0519 11:14:24.035936 51807 buildroot.go:174] setting up certificates
I0519 11:14:24.035939 51807 provision.go:83] configureAuth start
I0519 11:14:24.035942 51807 provision.go:138] copyHostCerts
I0519 11:14:24.037564 51807 exec_runner.go:144] found /Users/cn/.minikube/ca.pem, removing ...
I0519 11:14:24.037593 51807 exec_runner.go:207] rm: /Users/cn/.minikube/ca.pem
I0519 11:14:24.037731 51807 exec_runner.go:151] cp: /Users/cn/.minikube/certs/ca.pem --> /Users/cn/.minikube/ca.pem (1066 bytes)
I0519 11:14:24.037903 51807 exec_runner.go:144] found /Users/cn/.minikube/cert.pem, removing ...
I0519 11:14:24.037905 51807 exec_runner.go:207] rm: /Users/cn/.minikube/cert.pem
I0519 11:14:24.037961 51807 exec_runner.go:151] cp: /Users/cn/.minikube/certs/cert.pem --> /Users/cn/.minikube/cert.pem (1111 bytes)
I0519 11:14:24.038067 51807 exec_runner.go:144] found /Users/cn/.minikube/key.pem, removing ...
I0519 11:14:24.038069 51807 exec_runner.go:207] rm: /Users/cn/.minikube/key.pem
I0519 11:14:24.038118 51807 exec_runner.go:151] cp: /Users/cn/.minikube/certs/key.pem --> /Users/cn/.minikube/key.pem (1679 bytes)
I0519 11:14:24.038635 51807 provision.go:112] generating server cert: /Users/cn/.minikube/machines/server.pem ca-key=/Users/cn/.minikube/certs/ca.pem private-key=/Users/cn/.minikube/certs/ca-key.pem org=cn.minikube san=[127.0.0.1 localhost localhost 127.0.0.1 minikube minikube]
I0519 11:14:24.119508 51807 provision.go:172] copyRemoteCerts
I0519 11:14:24.120030 51807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0519 11:14:24.120046 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:24.150678 51807 ssh_runner.go:362] scp /Users/cn/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1066 bytes)
I0519 11:14:24.157201 51807 ssh_runner.go:362] scp /Users/cn/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0519 11:14:24.164500 51807 ssh_runner.go:362] scp /Users/cn/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0519 11:14:24.171344 51807 provision.go:86] duration metric: configureAuth took 135.401708ms
I0519 11:14:24.171349 51807 buildroot.go:189] setting minikube options for container-runtime
I0519 11:14:24.171466 51807 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
I0519 11:14:24.171469 51807 machine.go:91] provisioned docker machine in 262.572458ms
I0519 11:14:24.171470 51807 client.go:171] LocalClient.Create took 20.434548125s
I0519 11:14:24.171484 51807 start.go:167] duration metric: libmachine.API.Create for "minikube" took 20.434598584s
I0519 11:14:24.171486 51807 start.go:300] post-start starting for "minikube" (driver="qemu2")
I0519 11:14:24.171488 51807 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0519 11:14:24.171568 51807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0519 11:14:24.171573 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:24.201083 51807 ssh_runner.go:195] Run: cat /etc/os-release
I0519 11:14:24.202454 51807 info.go:137] Remote host: Buildroot 2021.02.12
I0519 11:14:24.202459 51807 filesync.go:126] Scanning /Users/cn/.minikube/addons for local assets ...
I0519 11:14:24.202527 51807 filesync.go:126] Scanning /Users/cn/.minikube/files for local assets ...
I0519 11:14:24.202551 51807 start.go:303] post-start completed in 31.0635ms
I0519 11:14:24.202920 51807 profile.go:148] Saving config to /Users/cn/.minikube/profiles/minikube/config.json ...
I0519 11:14:24.203100 51807 start.go:128] duration metric: createHost completed in 20.531729792s
I0519 11:14:24.203163 51807 main.go:141] libmachine: Using SSH client type: native
I0519 11:14:24.203380 51807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050e3560] 0x1050e5f40 <nil> [] 0s} localhost 52167 <nil> <nil>}
I0519 11:14:24.203383 51807 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0519 11:14:24.260799 51807 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684512864.645162545
I0519 11:14:24.260803 51807 fix.go:207] guest clock: 1684512864.645162545
I0519 11:14:24.260806 51807 fix.go:220] Guest: 2023-05-19 11:14:24.645162545 -0500 CDT Remote: 2023-05-19 11:14:24.203102 -0500 CDT m=+20.617580376 (delta=442.060545ms)
I0519 11:14:24.260815 51807 fix.go:191] guest clock delta is within tolerance: 442.060545ms
I0519 11:14:24.260817 51807 start.go:83] releasing machines lock for "minikube", held for 20.589474375s
I0519 11:14:24.260922 51807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0519 11:14:24.261156 51807 ssh_runner.go:195] Run: cat /version.json
I0519 11:14:24.261161 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:24.262083 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:24.366964 51807 ssh_runner.go:195] Run: systemctl --version
I0519 11:14:24.369211 51807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0519 11:14:24.371255 51807 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0519 11:14:24.371347 51807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0519 11:14:24.376608 51807 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0519 11:14:24.376628 51807 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime containerd
I0519 11:14:24.376742 51807 ssh_runner.go:195] Run: sudo crictl images --output json
I0519 11:14:28.412074 51807 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.035324458s)
I0519 11:14:28.412412 51807 containerd.go:606] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.26.3". assuming images are not preloaded.
I0519 11:14:28.413077 51807 ssh_runner.go:195] Run: which lz4
I0519 11:14:28.418175 51807 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0519 11:14:28.421946 51807 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0519 11:14:28.422018 51807 ssh_runner.go:362] scp /Users/cn/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (376417173 bytes)
I0519 11:14:29.233007 51807 containerd.go:553] Took 0.815064 seconds to copy over tarball
I0519 11:14:29.233117 51807 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0519 11:14:30.536879 51807 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.303761625s)
I0519 11:14:30.536887 51807 containerd.go:560] Took 1.303876 seconds to extract the tarball
I0519 11:14:30.536898 51807 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0519 11:14:30.556007 51807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0519 11:14:30.647224 51807 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0519 11:14:30.653944 51807 start.go:481] detecting cgroup driver to use...
I0519 11:14:30.654204 51807 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0519 11:14:31.174849 51807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0519 11:14:31.179899 51807 docker.go:193] disabling cri-docker service (if available) ...
I0519 11:14:31.179977 51807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0519 11:14:31.184715 51807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0519 11:14:31.189834 51807 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0519 11:14:31.268570 51807 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0519 11:14:31.351538 51807 docker.go:209] disabling docker service ...
I0519 11:14:31.351639 51807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0519 11:14:31.356720 51807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0519 11:14:31.361143 51807 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0519 11:14:31.431854 51807 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0519 11:14:31.496294 51807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0519 11:14:31.500612 51807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0519 11:14:31.506106 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0519 11:14:31.509352 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0519 11:14:31.512702 51807 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0519 11:14:31.512782 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0519 11:14:31.515998 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0519 11:14:31.518823 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0519 11:14:31.521650 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0519 11:14:31.524680 51807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0519 11:14:31.527841 51807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0519 11:14:31.530874 51807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0519 11:14:31.533588 51807 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0519 11:14:31.533644 51807 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0519 11:14:31.540085 51807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0519 11:14:31.542802 51807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0519 11:14:31.625432 51807 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0519 11:14:31.649569 51807 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
I0519 11:14:31.649651 51807 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0519 11:14:31.651382 51807 retry.go:31] will retry after 621.005546ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0519 11:14:32.274697 51807 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0519 11:14:32.281899 51807 start.go:549] Will wait 60s for crictl version
I0519 11:14:32.282405 51807 ssh_runner.go:195] Run: which crictl
I0519 11:14:32.286298 51807 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0519 11:14:32.305964 51807 start.go:565] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.0
RuntimeApiVersion: v1alpha2
I0519 11:14:32.306077 51807 ssh_runner.go:195] Run: containerd --version
I0519 11:14:32.317255 51807 ssh_runner.go:195] Run: containerd --version
I0519 11:14:32.359446 51807 out.go:177] * Preparing Kubernetes v1.26.3 on containerd 1.7.0 ...
I0519 11:14:32.381804 51807 ssh_runner.go:195] Run: grep 10.0.2.2 host.minikube.internal$ /etc/hosts
I0519 11:14:32.383130 51807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0519 11:14:32.391414 51807 out.go:177] - kubelet.cgroup-driver=systemd
I0519 11:14:32.396713 51807 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime containerd
I0519 11:14:32.396800 51807 ssh_runner.go:195] Run: sudo crictl images --output json
I0519 11:14:32.407826 51807 containerd.go:610] all images are preloaded for containerd runtime.
I0519 11:14:32.407832 51807 containerd.go:524] Images already preloaded, skipping extraction
I0519 11:14:32.408166 51807 ssh_runner.go:195] Run: sudo crictl images --output json
I0519 11:14:32.417425 51807 containerd.go:610] all images are preloaded for containerd runtime.
I0519 11:14:32.417431 51807 cache_images.go:84] Images are preloaded, skipping loading
I0519 11:14:32.417661 51807 ssh_runner.go:195] Run: sudo crictl info
I0519 11:14:32.428074 51807 cni.go:84] Creating CNI manager for "calico"
I0519 11:14:32.428309 51807 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0519 11:14:32.428319 51807 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0519 11:14:32.428385 51807 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.0.2.15
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 10.0.2.15
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0519 11:14:32.428526 51807 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0519 11:14:32.428623 51807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0519 11:14:32.431562 51807 binaries.go:44] Found k8s binaries, skipping transfer
I0519 11:14:32.431632 51807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0519 11:14:32.434116 51807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (401 bytes)
I0519 11:14:32.439146 51807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0519 11:14:32.443910 51807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
I0519 11:14:32.449304 51807 ssh_runner.go:195] Run: grep 10.0.2.15 control-plane.minikube.internal$ /etc/hosts
I0519 11:14:32.450424 51807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0519 11:14:32.453914 51807 certs.go:56] Setting up /Users/cn/.minikube/profiles/minikube for IP: 10.0.2.15
I0519 11:14:32.453922 51807 certs.go:186] acquiring lock for shared ca certs: {Name:mkca2538c9d565d7e8e335eeb20c1c5651066142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.454182 51807 certs.go:195] skipping minikubeCA CA generation: /Users/cn/.minikube/ca.key
I0519 11:14:32.454347 51807 certs.go:195] skipping proxyClientCA CA generation: /Users/cn/.minikube/proxy-client-ca.key
I0519 11:14:32.454371 51807 certs.go:315] generating minikube-user signed cert: /Users/cn/.minikube/profiles/minikube/client.key
I0519 11:14:32.454375 51807 crypto.go:68] Generating cert /Users/cn/.minikube/profiles/minikube/client.crt with IP's: []
I0519 11:14:32.768606 51807 crypto.go:156] Writing cert to /Users/cn/.minikube/profiles/minikube/client.crt ...
I0519 11:14:32.768613 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/client.crt: {Name:mkbbd88d6241b73bbfdc4de6d17f7b49b268a420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.769156 51807 crypto.go:164] Writing key to /Users/cn/.minikube/profiles/minikube/client.key ...
I0519 11:14:32.769159 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/client.key: {Name:mke1dd1dcb10cc8c8f17e71b0f657db9d3e8558c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.769361 51807 certs.go:315] generating minikube signed cert: /Users/cn/.minikube/profiles/minikube/apiserver.key.49504c3e
I0519 11:14:32.769368 51807 crypto.go:68] Generating cert /Users/cn/.minikube/profiles/minikube/apiserver.crt.49504c3e with IP's: [10.0.2.15 10.96.0.1 127.0.0.1 10.0.0.1]
I0519 11:14:32.866443 51807 crypto.go:156] Writing cert to /Users/cn/.minikube/profiles/minikube/apiserver.crt.49504c3e ...
I0519 11:14:32.866449 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/apiserver.crt.49504c3e: {Name:mk7e1c3a6a056eae46fcecc08dcdbb6ad029ec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.866637 51807 crypto.go:164] Writing key to /Users/cn/.minikube/profiles/minikube/apiserver.key.49504c3e ...
I0519 11:14:32.866639 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/apiserver.key.49504c3e: {Name:mk5c8f1d4a1ac7fc4fa3e58c36eae2d1b3b2487d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.866737 51807 certs.go:333] copying /Users/cn/.minikube/profiles/minikube/apiserver.crt.49504c3e -> /Users/cn/.minikube/profiles/minikube/apiserver.crt
I0519 11:14:32.866841 51807 certs.go:337] copying /Users/cn/.minikube/profiles/minikube/apiserver.key.49504c3e -> /Users/cn/.minikube/profiles/minikube/apiserver.key
I0519 11:14:32.866942 51807 certs.go:315] generating aggregator signed cert: /Users/cn/.minikube/profiles/minikube/proxy-client.key
I0519 11:14:32.866949 51807 crypto.go:68] Generating cert /Users/cn/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0519 11:14:32.919787 51807 crypto.go:156] Writing cert to /Users/cn/.minikube/profiles/minikube/proxy-client.crt ...
I0519 11:14:32.919794 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/proxy-client.crt: {Name:mkb7caf921dc12c5c958d0100fa7ef69566da74c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.919983 51807 crypto.go:164] Writing key to /Users/cn/.minikube/profiles/minikube/proxy-client.key ...
I0519 11:14:32.919985 51807 lock.go:35] WriteFile acquiring /Users/cn/.minikube/profiles/minikube/proxy-client.key: {Name:mk1d97379721ae9ba5decd24bff805f9f98c3e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:32.920242 51807 certs.go:401] found cert: /Users/cn/.minikube/certs/Users/cn/.minikube/certs/ca-key.pem (1679 bytes)
I0519 11:14:32.920370 51807 certs.go:401] found cert: /Users/cn/.minikube/certs/Users/cn/.minikube/certs/ca.pem (1066 bytes)
I0519 11:14:32.920398 51807 certs.go:401] found cert: /Users/cn/.minikube/certs/Users/cn/.minikube/certs/cert.pem (1111 bytes)
I0519 11:14:32.920420 51807 certs.go:401] found cert: /Users/cn/.minikube/certs/Users/cn/.minikube/certs/key.pem (1679 bytes)
I0519 11:14:32.920799 51807 ssh_runner.go:362] scp /Users/cn/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0519 11:14:32.928764 51807 ssh_runner.go:362] scp /Users/cn/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0519 11:14:32.936203 51807 ssh_runner.go:362] scp /Users/cn/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0519 11:14:32.943448 51807 ssh_runner.go:362] scp /Users/cn/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0519 11:14:32.950793 51807 ssh_runner.go:362] scp /Users/cn/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0519 11:14:32.957475 51807 ssh_runner.go:362] scp /Users/cn/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0519 11:14:32.965017 51807 ssh_runner.go:362] scp /Users/cn/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0519 11:14:32.972712 51807 ssh_runner.go:362] scp /Users/cn/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0519 11:14:32.980256 51807 ssh_runner.go:362] scp /Users/cn/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0519 11:14:32.987291 51807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0519 11:14:32.993316 51807 ssh_runner.go:195] Run: openssl version
I0519 11:14:32.995364 51807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0519 11:14:32.998123 51807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0519 11:14:32.999519 51807 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 May 18 21:18 /usr/share/ca-certificates/minikubeCA.pem
I0519 11:14:32.999582 51807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0519 11:14:33.001372 51807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0519 11:14:33.005124 51807 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.30.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52198 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP:}
I0519 11:14:33.005173 51807 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0519 11:14:33.005246 51807 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0519 11:14:33.015718 51807 cri.go:87] found id: ""
I0519 11:14:33.015822 51807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0519 11:14:33.018684 51807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0519 11:14:33.021234 51807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0519 11:14:33.023979 51807 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0519 11:14:33.023991 51807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0519 11:14:33.052461 51807 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0519 11:14:33.052534 51807 kubeadm.go:322] [preflight] Running pre-flight checks
I0519 11:14:33.130853 51807 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0519 11:14:33.130905 51807 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0519 11:14:33.130946 51807 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0519 11:14:33.179488 51807 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0519 11:14:33.193597 51807 out.go:204] - Generating certificates and keys ...
I0519 11:14:33.193641 51807 kubeadm.go:322] [certs] Using existing ca certificate authority
I0519 11:14:33.193677 51807 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0519 11:14:33.331656 51807 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0519 11:14:33.386947 51807 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0519 11:14:33.453368 51807 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0519 11:14:33.785477 51807 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0519 11:14:34.016039 51807 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0519 11:14:34.016096 51807 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
I0519 11:14:34.181447 51807 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0519 11:14:34.181528 51807 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
I0519 11:14:34.272650 51807 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0519 11:14:34.420025 51807 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0519 11:14:34.485509 51807 kubeadm.go:322] [certs] Generating "sa" key and public key
I0519 11:14:34.485536 51807 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0519 11:14:34.704066 51807 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0519 11:14:34.789467 51807 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0519 11:14:34.873277 51807 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0519 11:14:34.957373 51807 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0519 11:14:34.964255 51807 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0519 11:14:34.964653 51807 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0519 11:14:34.964670 51807 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0519 11:14:35.049862 51807 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0519 11:14:35.055841 51807 out.go:204] - Booting up control plane ...
I0519 11:14:35.055925 51807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0519 11:14:35.056032 51807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0519 11:14:35.056069 51807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0519 11:14:35.056117 51807 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0519 11:14:35.056196 51807 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0519 11:14:39.054742 51807 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001036 seconds
I0519 11:14:39.054845 51807 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0519 11:14:39.060542 51807 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0519 11:14:39.573801 51807 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0519 11:14:39.573901 51807 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0519 11:14:40.084370 51807 kubeadm.go:322] [bootstrap-token] Using token: c6uqxt.44t5avxo5cuzuw34
I0519 11:14:40.089432 51807 out.go:204] - Configuring RBAC rules ...
I0519 11:14:40.089552 51807 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0519 11:14:40.095147 51807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0519 11:14:40.099607 51807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0519 11:14:40.101621 51807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0519 11:14:40.103606 51807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0519 11:14:40.105465 51807 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0519 11:14:40.111529 51807 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0519 11:14:40.275528 51807 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0519 11:14:40.497503 51807 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0519 11:14:40.497892 51807 kubeadm.go:322]
I0519 11:14:40.497922 51807 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0519 11:14:40.497924 51807 kubeadm.go:322]
I0519 11:14:40.497958 51807 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0519 11:14:40.497959 51807 kubeadm.go:322]
I0519 11:14:40.497970 51807 kubeadm.go:322] mkdir -p $HOME/.kube
I0519 11:14:40.497998 51807 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0519 11:14:40.498023 51807 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0519 11:14:40.498026 51807 kubeadm.go:322]
I0519 11:14:40.498050 51807 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0519 11:14:40.498051 51807 kubeadm.go:322]
I0519 11:14:40.498078 51807 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0519 11:14:40.498079 51807 kubeadm.go:322]
I0519 11:14:40.498109 51807 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0519 11:14:40.498155 51807 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0519 11:14:40.498190 51807 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0519 11:14:40.498191 51807 kubeadm.go:322]
I0519 11:14:40.498229 51807 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0519 11:14:40.498272 51807 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0519 11:14:40.498273 51807 kubeadm.go:322]
I0519 11:14:40.498316 51807 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token c6uqxt.44t5avxo5cuzuw34 \
I0519 11:14:40.498370 51807 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:d508bb00425ff810f48fd03de12dc8dcf35b6bcdf042d9367f870badb657148f \
I0519 11:14:40.498378 51807 kubeadm.go:322] --control-plane
I0519 11:14:40.498380 51807 kubeadm.go:322]
I0519 11:14:40.498421 51807 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0519 11:14:40.498426 51807 kubeadm.go:322]
I0519 11:14:40.498464 51807 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token c6uqxt.44t5avxo5cuzuw34 \
I0519 11:14:40.498526 51807 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:d508bb00425ff810f48fd03de12dc8dcf35b6bcdf042d9367f870badb657148f
I0519 11:14:40.498688 51807 kubeadm.go:322] W0519 16:14:33.434220 700 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0519 11:14:40.498750 51807 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0519 11:14:40.498753 51807 cni.go:84] Creating CNI manager for "calico"
I0519 11:14:40.507343 51807 out.go:177] * Configuring Calico (Container Networking Interface) ...
I0519 11:14:40.510325 51807 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ...
I0519 11:14:40.510329 51807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (235268 bytes)
I0519 11:14:40.520820 51807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0519 11:14:41.149224 51807 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0519 11:14:41.149506 51807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0519 11:14:41.149513 51807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=08896fd1dc362c097c925146c4a0d0dac715ace0 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_05_19T11_14_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0519 11:14:41.204653 51807 kubeadm.go:1073] duration metric: took 55.254958ms to wait for elevateKubeSystemPrivileges.
I0519 11:14:41.204666 51807 ops.go:34] apiserver oom_adj: -16
I0519 11:14:41.204677 51807 host.go:66] Checking if "minikube" exists ...
I0519 11:14:41.206145 51807 main.go:141] libmachine: Using SSH client type: external
I0519 11:14:41.206161 51807 main.go:141] libmachine: Using SSH private key: /Users/cn/.minikube/machines/minikube/id_rsa (-rw-------)
I0519 11:14:41.206178 51807 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/cn/.minikube/machines/minikube/id_rsa -p 52167] /usr/bin/ssh <nil>}
I0519 11:14:41.206191 51807 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/cn/.minikube/machines/minikube/id_rsa -p 52167 -f -NTL 52198:localhost:8443
I0519 11:14:41.240977 51807 kubeadm.go:403] StartCluster complete in 8.235867666s
I0519 11:14:41.241006 51807 settings.go:142] acquiring lock: {Name:mk888dc7fa0eb1cc86198f7bfead11a145c99000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:41.241086 51807 settings.go:150] Updating kubeconfig: /Users/cn/.kube/config
I0519 11:14:41.241705 51807 lock.go:35] WriteFile acquiring /Users/cn/.kube/config: {Name:mk2ca3656184ef58a870f8a28a7ca7883bf3f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 11:14:41.242100 51807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0519 11:14:41.242233 51807 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0519 11:14:41.242265 51807 addons.go:66] Setting storage-provisioner=true in profile "minikube"
I0519 11:14:41.242273 51807 addons.go:228] Setting addon storage-provisioner=true in "minikube"
I0519 11:14:41.242274 51807 addons.go:66] Setting default-storageclass=true in profile "minikube"
I0519 11:14:41.242289 51807 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0519 11:14:41.242423 51807 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
I0519 11:14:41.242790 51807 host.go:66] Checking if "minikube" exists ...
I0519 11:14:41.249363 51807 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0519 11:14:41.252338 51807 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0519 11:14:41.252342 51807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0519 11:14:41.252349 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:41.275798 51807 addons.go:228] Setting addon default-storageclass=true in "minikube"
I0519 11:14:41.275813 51807 host.go:66] Checking if "minikube" exists ...
I0519 11:14:41.276607 51807 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0519 11:14:41.276611 51807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0519 11:14:41.276617 51807 sshutil.go:53] new ssh client: &{IP:localhost Port:52167 SSHKeyPath:/Users/cn/.minikube/machines/minikube/id_rsa Username:docker}
I0519 11:14:41.300648 51807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 10.0.2.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0519 11:14:41.305290 51807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0519 11:14:41.314823 51807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0519 11:14:41.807525 51807 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0519 11:14:41.807542 51807 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0519 11:14:41.819416 51807 out.go:177] * Verifying Kubernetes components...
I0519 11:14:41.823469 51807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0519 11:14:41.882618 51807 start.go:916] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
I0519 11:14:41.931349 51807 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0519 11:14:41.922868 51807 api_server.go:51] waiting for apiserver process to appear ...
I0519 11:14:41.935390 51807 addons.go:499] enable addons completed in 693.164084ms: enabled=[default-storageclass storage-provisioner]
I0519 11:14:41.935473 51807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 11:14:41.939961 51807 api_server.go:71] duration metric: took 132.403875ms to wait for apiserver process to appear ...
I0519 11:14:41.939971 51807 api_server.go:87] waiting for apiserver healthz status ...
I0519 11:14:41.940117 51807 api_server.go:252] Checking apiserver healthz at https://localhost:52198/healthz ...
I0519 11:14:41.943969 51807 api_server.go:278] https://localhost:52198/healthz returned 200:
ok
I0519 11:14:41.947092 51807 api_server.go:140] control plane version: v1.26.3
I0519 11:14:41.947097 51807 api_server.go:130] duration metric: took 7.1235ms to wait for apiserver health ...
I0519 11:14:41.947099 51807 system_pods.go:43] waiting for kube-system pods to appear ...
I0519 11:14:41.950299 51807 system_pods.go:59] 5 kube-system pods found
I0519 11:14:41.950305 51807 system_pods.go:61] "etcd-minikube" [666b7e8d-1903-4d1b-9200-3c4313e82ff3] Pending
I0519 11:14:41.950308 51807 system_pods.go:61] "kube-apiserver-minikube" [609d37cf-dd9f-441a-bb62-31dc9ba25aed] Pending
I0519 11:14:41.950309 51807 system_pods.go:61] "kube-controller-manager-minikube" [ee8c3992-1486-490a-8dde-511081487e5f] Pending
I0519 11:14:41.950311 51807 system_pods.go:61] "kube-scheduler-minikube" [e9cff4d5-ee11-428e-8ebc-4878d9e58ff6] Pending
I0519 11:14:41.950314 51807 system_pods.go:61] "storage-provisioner" [15896bd2-b961-4292-baa6-ea85122bd71c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0519 11:14:41.950318 51807 system_pods.go:74] duration metric: took 3.215042ms to wait for pod list to return data ...
I0519 11:14:41.950322 51807 kubeadm.go:578] duration metric: took 142.767708ms to wait for : map[apiserver:true system_pods:true] ...
I0519 11:14:41.950327 51807 node_conditions.go:102] verifying NodePressure condition ...
I0519 11:14:41.951930 51807 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
I0519 11:14:41.951937 51807 node_conditions.go:123] node cpu capacity is 2
I0519 11:14:41.951942 51807 node_conditions.go:105] duration metric: took 1.6135ms to run NodePressure ...
I0519 11:14:41.951946 51807 start.go:228] waiting for startup goroutines ...
I0519 11:14:41.951949 51807 start.go:233] waiting for cluster config update ...
I0519 11:14:41.951953 51807 start.go:242] writing updated cluster config ...
I0519 11:14:41.952209 51807 ssh_runner.go:195] Run: rm -f paused
I0519 11:14:42.154388 51807 start.go:568] kubectl: 1.27.1, cluster: 1.26.3 (minor skew: 1)
I0519 11:14:42.158613 51807 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
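Note that the start log above finishes with "Done!" even though, as the sections below show, the node never leaves NotReady because the Calico CNI never initializes. A minimal way to confirm the symptom after start returns (a suggested check, assuming kubectl is pointed at the minikube context; these commands are not part of the captured log):

  kubectl get nodes
  kubectl -n kube-system get pods -o wide

Given the state captured below, the node reports NotReady and the calico pods presumably never reach Running.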
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
23d4cf0367148 c859f97be11ac 8 minutes ago Running kube-proxy 0 f627d447e22cd
1b26016e98ed0 3b6ac91ff8d39 8 minutes ago Running kube-controller-manager 0 cc463ec3cf1e9
f0ea872fa692a fa167119f9a55 8 minutes ago Running kube-scheduler 0 bbc634f4bba2c
204544e2af16a 3f1ae10c5c85d 8 minutes ago Running kube-apiserver 0 9d2f6329198a2
2a4c7582cea90 ef24580282403 8 minutes ago Running etcd 0 33e564d2806fa
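Eight minutes after start, only the static control-plane containers and kube-proxy are running; no calico-node or calico-kube-controllers container appears in this list. To also see containers that were created but never started or have since exited (a suggested check, assuming crictl inside the minikube guest as usual; not part of the captured log):

  minikube ssh -- sudo crictl ps -a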
*
* ==> containerd <==
* -- Journal begins at Fri 2023-05-19 16:14:17 UTC, ends at Fri 2023-05-19 16:22:55 UTC. --
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.053978507Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.053995882Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.054001924Z" level=info msg="containerd successfully booted in 0.011186s"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.054805590Z" level=info msg="Start subscribing containerd event"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.054876757Z" level=info msg="Start recovering state"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.068514924Z" level=info msg="Start event monitor"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.068533257Z" level=info msg="Start snapshots syncer"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.068538549Z" level=info msg="Start cni network conf syncer for default"
May 19 16:14:32 minikube containerd[595]: time="2023-05-19T16:14:32.068541965Z" level=info msg="Start streaming server"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.157406842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-minikube,Uid:96f2f6c9bdc88be3938b73d38d286d58,Namespace:kube-system,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.159152342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-minikube,Uid:0818f4b1a57de9c3f9c82667e7fcc870,Namespace:kube-system,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.159444301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-minikube,Uid:b66e5400d627fb8a329212ef9c1f2679,Namespace:kube-system,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.159788926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-minikube,Uid:52fd999db5ff4014b83a4ad42c06b3b5,Namespace:kube-system,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.222250092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.222297301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.222331634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.222338926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.225516967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.225550676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.225566884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.225574301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.230151592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.230202551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.230219176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.230231926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.231235301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.231286509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.231305301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.231320051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.343654217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-minikube,Uid:b66e5400d627fb8a329212ef9c1f2679,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e564d2806fabda8fc0debf0728867c229b9dd74b5ca5f38d2fb8831c582af2\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.348363551Z" level=info msg="CreateContainer within sandbox \"33e564d2806fabda8fc0debf0728867c229b9dd74b5ca5f38d2fb8831c582af2\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.360046092Z" level=info msg="CreateContainer within sandbox \"33e564d2806fabda8fc0debf0728867c229b9dd74b5ca5f38d2fb8831c582af2\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"2a4c7582cea901e2ec941f2d508595b8558bd132b9e99811e232645276638175\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.360342301Z" level=info msg="StartContainer for \"2a4c7582cea901e2ec941f2d508595b8558bd132b9e99811e232645276638175\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.403612134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-minikube,Uid:96f2f6c9bdc88be3938b73d38d286d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d2f6329198a2b0cc6bf97da93924af05046309cfc64d0548db75c51b441cf18\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.404637967Z" level=info msg="CreateContainer within sandbox \"9d2f6329198a2b0cc6bf97da93924af05046309cfc64d0548db75c51b441cf18\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.416122342Z" level=info msg="CreateContainer within sandbox \"9d2f6329198a2b0cc6bf97da93924af05046309cfc64d0548db75c51b441cf18\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"204544e2af16acabba60492d9a16d181d4b59ff3e7e3ab05c228bf5e047cf269\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.416356384Z" level=info msg="StartContainer for \"204544e2af16acabba60492d9a16d181d4b59ff3e7e3ab05c228bf5e047cf269\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.473690759Z" level=info msg="StartContainer for \"2a4c7582cea901e2ec941f2d508595b8558bd132b9e99811e232645276638175\" returns successfully"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.481194509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-minikube,Uid:52fd999db5ff4014b83a4ad42c06b3b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc463ec3cf1e901eda50fa607f0fa11d12dc22a946de00f53c7c79e39c168708\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.481754342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-minikube,Uid:0818f4b1a57de9c3f9c82667e7fcc870,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbc634f4bba2c555a9ce949cbaaf4f2c3abe03d510570bfb25f0b63bf3210cef\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.482583926Z" level=info msg="CreateContainer within sandbox \"cc463ec3cf1e901eda50fa607f0fa11d12dc22a946de00f53c7c79e39c168708\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.483746967Z" level=info msg="CreateContainer within sandbox \"bbc634f4bba2c555a9ce949cbaaf4f2c3abe03d510570bfb25f0b63bf3210cef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.501755092Z" level=info msg="CreateContainer within sandbox \"bbc634f4bba2c555a9ce949cbaaf4f2c3abe03d510570bfb25f0b63bf3210cef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0ea872fa692a0efd433400a439325995dae38e2ddec95a381efb495d684e262\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.502118259Z" level=info msg="StartContainer for \"f0ea872fa692a0efd433400a439325995dae38e2ddec95a381efb495d684e262\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.503451426Z" level=info msg="CreateContainer within sandbox \"cc463ec3cf1e901eda50fa607f0fa11d12dc22a946de00f53c7c79e39c168708\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b26016e98ed0ea7a011b084ed45190a4d3b63d8b51fb0a4254f877b48bd1f23\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.503612342Z" level=info msg="StartContainer for \"1b26016e98ed0ea7a011b084ed45190a4d3b63d8b51fb0a4254f877b48bd1f23\""
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.558869217Z" level=info msg="StartContainer for \"204544e2af16acabba60492d9a16d181d4b59ff3e7e3ab05c228bf5e047cf269\" returns successfully"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.656867051Z" level=info msg="StartContainer for \"1b26016e98ed0ea7a011b084ed45190a4d3b63d8b51fb0a4254f877b48bd1f23\" returns successfully"
May 19 16:14:36 minikube containerd[595]: time="2023-05-19T16:14:36.676162592Z" level=info msg="StartContainer for \"f0ea872fa692a0efd433400a439325995dae38e2ddec95a381efb495d684e262\" returns successfully"
May 19 16:14:52 minikube containerd[595]: time="2023-05-19T16:14:52.950254425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.589719737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ht76,Uid:b31e9dd5-c401-4262-83d8-430cfbcc61ad,Namespace:kube-system,Attempt:0,}"
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.599724367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.599758610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.599767212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.599773856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.630577739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ht76,Uid:b31e9dd5-c401-4262-83d8-430cfbcc61ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"f627d447e22cdc3c7bf45637519ad4ec0ec7d11bce971f5c256b260d005a75eb\""
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.632108111Z" level=info msg="CreateContainer within sandbox \"f627d447e22cdc3c7bf45637519ad4ec0ec7d11bce971f5c256b260d005a75eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.641142648Z" level=info msg="CreateContainer within sandbox \"f627d447e22cdc3c7bf45637519ad4ec0ec7d11bce971f5c256b260d005a75eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23d4cf03671481d7e94e37ddcd4d4f9e1b914a46ef6b283b60341a33dec233c7\""
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.641574720Z" level=info msg="StartContainer for \"23d4cf03671481d7e94e37ddcd4d4f9e1b914a46ef6b283b60341a33dec233c7\""
May 19 16:14:54 minikube containerd[595]: time="2023-05-19T16:14:54.677433913Z" level=info msg="StartContainer for \"23d4cf03671481d7e94e37ddcd4d4f9e1b914a46ef6b283b60341a33dec233c7\" returns successfully"
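The most telling containerd line is the 16:14:52 message "No cni config template is specified, wait for other system components to drop the config." — containerd is still waiting for a CNI config file, and nothing later in the journal shows one arriving. With a working Calico install, the install-cni init container writes 10-calico.conflist into /etc/cni/net.d; a suggested way to verify from the host (assuming the default minikube profile):

  minikube ssh -- sudo ls -l /etc/cni/net.d

An empty directory here would match the "cni plugin not initialized" node condition shown below.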
*
* ==> describe nodes <==
* Name: minikube
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=08896fd1dc362c097c925146c4a0d0dac715ace0
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_05_19T11_14_41_0700
minikube.k8s.io/version=v1.30.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 May 2023 16:14:37 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Fri, 19 May 2023 16:22:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 19 May 2023 16:19:45 +0000 Fri, 19 May 2023 16:14:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 19 May 2023 16:19:45 +0000 Fri, 19 May 2023 16:14:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 19 May 2023 16:19:45 +0000 Fri, 19 May 2023 16:14:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 19 May 2023 16:19:45 +0000 Fri, 19 May 2023 16:14:37 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 10.0.2.15
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 17784760Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 2148684Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784760Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 2148684Ki
pods: 110
System Info:
Machine ID: a1206734bccd4718a9133720f2d48850
System UUID: a1206734bccd4718a9133720f2d48850
Boot ID: edc441ed-65f8-4c4f-bf06-dd0ea32945e9
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.0
Kubelet Version: v1.26.3
Kube-Proxy Version: v1.26.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 calico-node-fmxmj                   250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m2s
  kube-system                 etcd-minikube                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
  kube-system                 kube-apiserver-minikube             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
  kube-system                 kube-controller-manager-minikube    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
  kube-system                 kube-proxy-9ht76                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
  kube-system                 kube-scheduler-minikube             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                900m (45%)   0 (0%)
  memory             100Mi (4%)   0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m kube-proxy
Normal Starting 8m15s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 8m15s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 8m15s kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m15s kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m15s kubelet Node minikube status is now: NodeHasSufficientPID
Normal RegisteredNode 8m3s node-controller Node minikube event: Registered Node minikube in Controller
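The Ready=False condition above ("container runtime network not ready ... cni plugin not initialized") is the core symptom: calico-node-fmxmj was scheduled 8m2s ago, yet the node never became Ready. A suggested next step is to inspect that pod directly; the k8s-app=calico-node label is assumed from the stock Calico manifest:

  kubectl -n kube-system get pods -l k8s-app=calico-node
  kubectl -n kube-system describe pod -l k8s-app=calico-node

If the pod is stuck in an init container (commonly install-cni or upgrade-ipam), its events should say why the CNI config was never written.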
*
* ==> dmesg <==
* [May19 16:14] ACPI: SRAT not present
[ +0.000000] KASLR disabled due to lack of seed
[ +0.627908] EINJ: EINJ table not found.
[ +0.519873] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.043292] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000799] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +13.956177] systemd-fstab-generator[496]: Ignoring "noauto" for root device
[ +0.625724] systemd-fstab-generator[524]: Ignoring "noauto" for root device
[ +0.084052] systemd-fstab-generator[535]: Ignoring "noauto" for root device
[ +0.078786] systemd-fstab-generator[548]: Ignoring "noauto" for root device
[ +0.065365] systemd-fstab-generator[559]: Ignoring "noauto" for root device
[ +0.127637] systemd-fstab-generator[588]: Ignoring "noauto" for root device
[ +3.419939] systemd-fstab-generator[768]: Ignoring "noauto" for root device
[ +5.129727] systemd-fstab-generator[1153]: Ignoring "noauto" for root device
[ +14.429974] kauditd_printk_skb: 10 callbacks suppressed
*
* ==> etcd [2a4c7582cea901e2ec941f2d508595b8558bd132b9e99811e232645276638175] <==
* {"level":"warn","ts":"2023-05-19T16:14:36.531Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
{"level":"info","ts":"2023-05-19T16:14:36.532Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://10.0.2.15:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://10.0.2.15:2380","--initial-cluster=minikube=https://10.0.2.15:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://10.0.2.15:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://10.0.2.15:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-05-19T16:14:36.532Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.2.15:2380"]}
{"level":"info","ts":"2023-05-19T16:14:36.532Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-05-19T16:14:36.533Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"]}
{"level":"info","ts":"2023-05-19T16:14:36.533Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://10.0.2.15:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-05-19T16:14:36.551Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.63275ms"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"f074a195de705325","cluster-id":"ef296cf39f5d9d66"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=()"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 0"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f074a195de705325 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 1"}
{"level":"info","ts":"2023-05-19T16:14:36.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
{"level":"warn","ts":"2023-05-19T16:14:36.587Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-05-19T16:14:36.590Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2023-05-19T16:14:36.590Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-05-19T16:14:36.591Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"f074a195de705325","local-server-version":"3.5.6","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-05-19T16:14:36.597Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-05-19T16:14:36.597Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-05-19T16:14:36.598Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-05-19T16:14:36.598Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
{"level":"info","ts":"2023-05-19T16:14:36.602Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
{"level":"info","ts":"2023-05-19T16:14:36.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
{"level":"info","ts":"2023-05-19T16:14:36.791Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:minikube ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-19T16:14:36.795Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-19T16:14:36.799Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-05-19T16:14:36.799Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
{"level":"info","ts":"2023-05-19T16:14:36.811Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-05-19T16:14:36.811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 16:22:55 up 8 min, 0 users, load average: 0.41, 0.35, 0.19
Linux minikube 5.10.57 #1 SMP PREEMPT Mon Apr 3 22:26:25 UTC 2023 aarch64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [204544e2af16acabba60492d9a16d181d4b59ff3e7e3ab05c228bf5e047cf269] <==
* I0519 16:14:37.750250 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0519 16:14:37.750316 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0519 16:14:37.750292 1 secure_serving.go:210] Serving securely on [::]:8443
I0519 16:14:37.750306 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0519 16:14:37.750888 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0519 16:14:37.750922 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0519 16:14:37.750953 1 available_controller.go:494] Starting AvailableConditionController
I0519 16:14:37.750969 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0519 16:14:37.751132 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0519 16:14:37.751155 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0519 16:14:37.751335 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0519 16:14:37.751369 1 controller.go:83] Starting OpenAPI AggregationController
I0519 16:14:37.751437 1 autoregister_controller.go:141] Starting autoregister controller
I0519 16:14:37.751470 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0519 16:14:37.751574 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I0519 16:14:37.751770 1 controller.go:121] Starting legacy_token_tracking_controller
I0519 16:14:37.751790 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I0519 16:14:37.751813 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0519 16:14:37.751830 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0519 16:14:37.751851 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I0519 16:14:37.750312 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0519 16:14:37.753168 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0519 16:14:37.763132 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0519 16:14:37.763601 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0519 16:14:37.763628 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0519 16:14:37.763695 1 controller.go:85] Starting OpenAPI controller
I0519 16:14:37.763721 1 controller.go:85] Starting OpenAPI V3 controller
I0519 16:14:37.763752 1 naming_controller.go:291] Starting NamingConditionController
I0519 16:14:37.763786 1 establishing_controller.go:76] Starting EstablishingController
I0519 16:14:37.763809 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0519 16:14:37.763827 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0519 16:14:37.763845 1 crd_finalizer.go:266] Starting CRDFinalizer
I0519 16:14:37.801898 1 controller.go:615] quota admission added evaluator for: namespaces
I0519 16:14:37.809414 1 shared_informer.go:280] Caches are synced for node_authorizer
I0519 16:14:37.851890 1 shared_informer.go:280] Caches are synced for configmaps
I0519 16:14:37.851934 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0519 16:14:37.851976 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0519 16:14:37.851997 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0519 16:14:37.852158 1 cache.go:39] Caches are synced for autoregister controller
I0519 16:14:37.854935 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0519 16:14:37.855013 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0519 16:14:37.855021 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0519 16:14:37.863708 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0519 16:14:38.622363 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0519 16:14:38.760872 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0519 16:14:38.773645 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0519 16:14:38.773819 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0519 16:14:38.930963 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0519 16:14:38.941800 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0519 16:14:39.001986 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0519 16:14:39.004098 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
I0519 16:14:39.004468 1 controller.go:615] quota admission added evaluator for: endpoints
I0519 16:14:39.005894 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0519 16:14:39.810706 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0519 16:14:40.654402 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0519 16:14:40.659597 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0519 16:14:40.665350 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0519 16:14:41.297703 1 controller.go:615] quota admission added evaluator for: poddisruptionbudgets.policy
I0519 16:14:53.808175 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0519 16:14:53.956585 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
*
* ==> kube-controller-manager [1b26016e98ed0ea7a011b084ed45190a4d3b63d8b51fb0a4254f877b48bd1f23] <==
* I0519 16:14:52.855723 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0519 16:14:52.855738 1 shared_informer.go:280] Caches are synced for taint
I0519 16:14:52.855772 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0519 16:14:52.855792 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0519 16:14:52.855803 1 shared_informer.go:280] Caches are synced for endpoint
I0519 16:14:52.855811 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0519 16:14:52.855842 1 shared_informer.go:280] Caches are synced for job
I0519 16:14:52.855879 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0519 16:14:52.855921 1 taint_manager.go:211] "Sending events to api server"
I0519 16:14:52.856046 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0519 16:14:52.856099 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0519 16:14:52.856120 1 node_lifecycle_controller.go:1204] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0519 16:14:52.856235 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0519 16:14:52.856340 1 shared_informer.go:280] Caches are synced for ReplicationController
I0519 16:14:52.857277 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0519 16:14:52.860600 1 shared_informer.go:280] Caches are synced for node
I0519 16:14:52.860627 1 range_allocator.go:167] Sending events to api server.
I0519 16:14:52.860646 1 range_allocator.go:171] Starting range CIDR allocator
I0519 16:14:52.860648 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0519 16:14:52.860651 1 shared_informer.go:280] Caches are synced for cidrallocator
I0519 16:14:52.863410 1 range_allocator.go:372] Set node minikube PodCIDR to [10.244.0.0/24]
I0519 16:14:52.864367 1 shared_informer.go:280] Caches are synced for PVC protection
I0519 16:14:52.867446 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0519 16:14:52.870602 1 shared_informer.go:280] Caches are synced for disruption
I0519 16:14:52.873680 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0519 16:14:52.875773 1 shared_informer.go:280] Caches are synced for ephemeral
I0519 16:14:52.877873 1 shared_informer.go:280] Caches are synced for GC
I0519 16:14:52.880990 1 shared_informer.go:280] Caches are synced for expand
I0519 16:14:52.887313 1 shared_informer.go:280] Caches are synced for deployment
I0519 16:14:52.905889 1 shared_informer.go:280] Caches are synced for cronjob
I0519 16:14:52.905914 1 shared_informer.go:280] Caches are synced for PV protection
I0519 16:14:52.905931 1 shared_informer.go:280] Caches are synced for TTL
I0519 16:14:52.906469 1 shared_informer.go:280] Caches are synced for service account
I0519 16:14:52.905933 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0519 16:14:52.908059 1 shared_informer.go:280] Caches are synced for TTL after finished
I0519 16:14:52.908068 1 shared_informer.go:280] Caches are synced for HPA
I0519 16:14:52.908072 1 shared_informer.go:280] Caches are synced for persistent volume
I0519 16:14:52.909139 1 shared_informer.go:280] Caches are synced for namespace
I0519 16:14:52.913874 1 shared_informer.go:280] Caches are synced for daemon sets
I0519 16:14:52.943267 1 shared_informer.go:280] Caches are synced for resource quota
I0519 16:14:52.948690 1 shared_informer.go:280] Caches are synced for stateful set
I0519 16:14:53.010168 1 shared_informer.go:280] Caches are synced for resource quota
I0519 16:14:53.019602 1 shared_informer.go:273] Waiting for caches to sync for garbage collector
I0519 16:14:53.059223 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0519 16:14:53.108032 1 shared_informer.go:280] Caches are synced for attach detach
I0519 16:14:53.420483 1 shared_informer.go:280] Caches are synced for garbage collector
I0519 16:14:53.506916 1 shared_informer.go:280] Caches are synced for garbage collector
I0519 16:14:53.506928 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0519 16:14:53.811434 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
I0519 16:14:53.811446 1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set calico-kube-controllers-7bdbfc669 to 1"
I0519 16:14:53.960432 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9ht76"
I0519 16:14:53.963052 1 event.go:294] "Event occurred" object="kube-system/calico-node" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-fmxmj"
I0519 16:14:54.110907 1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-7bdbfc669" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-7bdbfc669-k4hjf"
I0519 16:14:54.111027 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-8w9jg"
E0519 16:14:54.119792 1 disruption.go:617] Error syncing PodDisruptionBudget kube-system/calico-kube-controllers, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
W0519 16:15:23.043931 1 shared_informer.go:550] resyncPeriod 15h6m14.446401837s is smaller than resyncCheckPeriod 19h16m43.214965861s and the informer has already started. Changing it to 19h16m43.214965861s
I0519 16:15:23.044095 1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.crd.projectcalico.org
I0519 16:15:23.044247 1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networksets.crd.projectcalico.org
I0519 16:15:23.044348 1 shared_informer.go:273] Waiting for caches to sync for resource quota
I0519 16:15:23.044377 1 shared_informer.go:280] Caches are synced for resource quota
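
Note: the only error in the controller-manager section above is the PodDisruptionBudget requeue for kube-system/calico-kube-controllers, which is an ordinary optimistic-concurrency conflict that the controller retries on its own; everything else syncs cleanly. A quick way to confirm nothing is stuck at this layer (assuming kubectl is pointed at this cluster):

$ kubectl -n kube-system get events --field-selector type=Warning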
*
* ==> kube-proxy [23d4cf03671481d7e94e37ddcd4d4f9e1b914a46ef6b283b60341a33dec233c7] <==
* I0519 16:14:54.699893 1 node.go:163] Successfully retrieved node IP: 10.0.2.15
I0519 16:14:54.699927 1 server_others.go:109] "Detected node IP" address="10.0.2.15"
I0519 16:14:54.699944 1 server_others.go:535] "Using iptables proxy"
I0519 16:14:54.714152 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0519 16:14:54.714163 1 server_others.go:176] "Using iptables Proxier"
I0519 16:14:54.714196 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0519 16:14:54.714409 1 server.go:655] "Version info" version="v1.26.3"
I0519 16:14:54.714421 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0519 16:14:54.714898 1 config.go:317] "Starting service config controller"
I0519 16:14:54.714956 1 shared_informer.go:273] Waiting for caches to sync for service config
I0519 16:14:54.715012 1 config.go:226] "Starting endpoint slice config controller"
I0519 16:14:54.715030 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0519 16:14:54.715650 1 config.go:444] "Starting node config controller"
I0519 16:14:54.716566 1 shared_informer.go:273] Waiting for caches to sync for node config
I0519 16:14:54.815172 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0519 16:14:54.815173 1 shared_informer.go:280] Caches are synced for service config
I0519 16:14:54.816785 1 shared_informer.go:280] Caches are synced for node config
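
Note: kube-proxy comes up normally in IPv4 single-stack mode with the iptables proxier. The route_localnet=1 line is informational; per the message itself, it can be avoided by disabling iptables.localhostNodePorts (--iptables-localhost-nodeports) or setting nodePortAddresses (--nodeport-addresses). The sysctl it refers to can be inspected from inside the VM (a diagnostic sketch, assuming the default minikube profile):

$ minikube ssh -- sysctl net.ipv4.conf.all.route_localnet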
*
* ==> kube-scheduler [f0ea872fa692a0efd433400a439325995dae38e2ddec95a381efb495d684e262] <==
* I0519 16:14:37.384254 1 serving.go:348] Generated self-signed cert in-memory
W0519 16:14:37.809282 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0519 16:14:37.809302 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0519 16:14:37.809307 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0519 16:14:37.809309 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0519 16:14:37.815543 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0519 16:14:37.815618 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0519 16:14:37.816498 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0519 16:14:37.816538 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0519 16:14:37.816839 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0519 16:14:37.816872 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0519 16:14:37.819057 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0519 16:14:37.819143 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0519 16:14:37.819157 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0519 16:14:37.819181 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:37.819189 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0519 16:14:37.819204 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:37.819208 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:37.819148 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0519 16:14:37.819129 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0519 16:14:37.819298 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0519 16:14:37.819314 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0519 16:14:37.819337 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0519 16:14:37.819347 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0519 16:14:37.819339 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0519 16:14:37.819326 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:37.819357 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0519 16:14:37.819376 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0519 16:14:37.819381 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0519 16:14:37.819394 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0519 16:14:37.819398 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0519 16:14:37.819409 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0519 16:14:37.819414 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0519 16:14:37.819431 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0519 16:14:37.819435 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0519 16:14:37.819447 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0519 16:14:37.819453 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0519 16:14:37.819467 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:37.819470 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0519 16:14:37.819510 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0519 16:14:37.819541 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0519 16:14:38.744157 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0519 16:14:38.744236 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0519 16:14:38.868519 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0519 16:14:38.868539 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
I0519 16:14:39.417430 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
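
Note: the "forbidden"/failed-to-list errors above are the usual startup race between the scheduler and RBAC bootstrap; they stop once the client-ca cache syncs at 16:14:39, so the scheduler is healthy. If they persisted, the fix the scheduler's own log suggests would apply (placeholders exactly as printed by the scheduler):

$ kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA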
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-05-19 16:14:17 UTC, ends at Fri 2023-05-19 16:22:55 UTC. --
May 19 16:18:30 minikube kubelet[1159]: E0519 16:18:30.835736 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:18:35 minikube kubelet[1159]: E0519 16:18:35.837724 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:18:40 minikube kubelet[1159]: E0519 16:18:40.838669 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:18:45 minikube kubelet[1159]: E0519 16:18:45.841756 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:18:50 minikube kubelet[1159]: E0519 16:18:50.843954 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:18:55 minikube kubelet[1159]: E0519 16:18:55.849711 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:00 minikube kubelet[1159]: E0519 16:19:00.851245 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:03 minikube kubelet[1159]: E0519 16:19:03.994244 1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/8c2b7f66-b59f-40e0-8006-50998596714a-bpffs podName:8c2b7f66-b59f-40e0-8006-50998596714a nodeName:}" failed. No retries permitted until 2023-05-19 16:21:05.994102898 +0000 UTC m=+385.353011304 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "bpffs" (UniqueName: "kubernetes.io/host-path/8c2b7f66-b59f-40e0-8006-50998596714a-bpffs") pod "calico-node-fmxmj" (UID: "8c2b7f66-b59f-40e0-8006-50998596714a") : hostPath type check failed: /sys/fs/bpf is not a directory
May 19 16:19:05 minikube kubelet[1159]: E0519 16:19:05.853803 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:10 minikube kubelet[1159]: E0519 16:19:10.856914 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:15 minikube kubelet[1159]: E0519 16:19:15.709360 1159 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[bpffs], unattached volumes=[lib-modules cni-bin-dir kube-api-access-xcvv8 nodeproc cni-log-dir host-local-net-dir cni-net-dir var-run-calico policysync sys-fs xtables-lock var-lib-calico bpffs]: timed out waiting for the condition" pod="kube-system/calico-node-fmxmj"
May 19 16:19:15 minikube kubelet[1159]: E0519 16:19:15.709485 1159 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[bpffs], unattached volumes=[lib-modules cni-bin-dir kube-api-access-xcvv8 nodeproc cni-log-dir host-local-net-dir cni-net-dir var-run-calico policysync sys-fs xtables-lock var-lib-calico bpffs]: timed out waiting for the condition" pod="kube-system/calico-node-fmxmj" podUID=8c2b7f66-b59f-40e0-8006-50998596714a
May 19 16:19:15 minikube kubelet[1159]: E0519 16:19:15.860180 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:20 minikube kubelet[1159]: E0519 16:19:20.861887 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:25 minikube kubelet[1159]: E0519 16:19:25.864619 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:30 minikube kubelet[1159]: E0519 16:19:30.867043 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:35 minikube kubelet[1159]: E0519 16:19:35.867973 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:40 minikube kubelet[1159]: W0519 16:19:40.736376 1159 machine.go:65] Cannot read vendor id correctly, set empty.
May 19 16:19:40 minikube kubelet[1159]: E0519 16:19:40.870650 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:45 minikube kubelet[1159]: E0519 16:19:45.873115 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:50 minikube kubelet[1159]: E0519 16:19:50.875263 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:19:55 minikube kubelet[1159]: E0519 16:19:55.876763 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:00 minikube kubelet[1159]: E0519 16:20:00.878707 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:05 minikube kubelet[1159]: E0519 16:20:05.880464 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:10 minikube kubelet[1159]: E0519 16:20:10.882913 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:15 minikube kubelet[1159]: E0519 16:20:15.884068 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:20 minikube kubelet[1159]: E0519 16:20:20.885420 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:25 minikube kubelet[1159]: E0519 16:20:25.887219 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:30 minikube kubelet[1159]: E0519 16:20:30.888530 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:35 minikube kubelet[1159]: E0519 16:20:35.889536 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:40 minikube kubelet[1159]: E0519 16:20:40.890686 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:45 minikube kubelet[1159]: E0519 16:20:45.892573 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:50 minikube kubelet[1159]: E0519 16:20:50.893866 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:20:55 minikube kubelet[1159]: E0519 16:20:55.896543 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:00 minikube kubelet[1159]: E0519 16:21:00.898411 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:05 minikube kubelet[1159]: E0519 16:21:05.905373 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:06 minikube kubelet[1159]: E0519 16:21:06.037370 1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/8c2b7f66-b59f-40e0-8006-50998596714a-bpffs podName:8c2b7f66-b59f-40e0-8006-50998596714a nodeName:}" failed. No retries permitted until 2023-05-19 16:23:08.037314682 +0000 UTC m=+507.396223089 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "bpffs" (UniqueName: "kubernetes.io/host-path/8c2b7f66-b59f-40e0-8006-50998596714a-bpffs") pod "calico-node-fmxmj" (UID: "8c2b7f66-b59f-40e0-8006-50998596714a") : hostPath type check failed: /sys/fs/bpf is not a directory
May 19 16:21:10 minikube kubelet[1159]: E0519 16:21:10.907036 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:15 minikube kubelet[1159]: E0519 16:21:15.909935 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:20 minikube kubelet[1159]: E0519 16:21:20.911152 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:25 minikube kubelet[1159]: E0519 16:21:25.912384 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:30 minikube kubelet[1159]: E0519 16:21:30.914658 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:33 minikube kubelet[1159]: E0519 16:21:33.705108 1159 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[bpffs], unattached volumes=[host-local-net-dir cni-net-dir lib-modules var-lib-calico policysync cni-bin-dir sys-fs xtables-lock nodeproc cni-log-dir kube-api-access-xcvv8 bpffs var-run-calico]: timed out waiting for the condition" pod="kube-system/calico-node-fmxmj"
May 19 16:21:33 minikube kubelet[1159]: E0519 16:21:33.705210 1159 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[bpffs], unattached volumes=[host-local-net-dir cni-net-dir lib-modules var-lib-calico policysync cni-bin-dir sys-fs xtables-lock nodeproc cni-log-dir kube-api-access-xcvv8 bpffs var-run-calico]: timed out waiting for the condition" pod="kube-system/calico-node-fmxmj" podUID=8c2b7f66-b59f-40e0-8006-50998596714a
May 19 16:21:35 minikube kubelet[1159]: E0519 16:21:35.916432 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:40 minikube kubelet[1159]: E0519 16:21:40.919386 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:45 minikube kubelet[1159]: E0519 16:21:45.921490 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:50 minikube kubelet[1159]: E0519 16:21:50.923172 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:21:55 minikube kubelet[1159]: E0519 16:21:55.925708 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:00 minikube kubelet[1159]: E0519 16:22:00.928395 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:05 minikube kubelet[1159]: E0519 16:22:05.934399 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:10 minikube kubelet[1159]: E0519 16:22:10.939242 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:15 minikube kubelet[1159]: E0519 16:22:15.941335 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:20 minikube kubelet[1159]: E0519 16:22:20.944061 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:25 minikube kubelet[1159]: E0519 16:22:25.946744 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:30 minikube kubelet[1159]: E0519 16:22:30.952013 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:35 minikube kubelet[1159]: E0519 16:22:35.953532 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:40 minikube kubelet[1159]: E0519 16:22:40.956923 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:45 minikube kubelet[1159]: E0519 16:22:45.960358 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 19 16:22:50 minikube kubelet[1159]: E0519 16:22:50.962442 1159 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
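
Note: this kubelet section shows the actual failure. The CNI never initializes because calico-node-fmxmj cannot start, and it cannot start because its bpffs hostPath volume fails the type check: /sys/fs/bpf is not a directory inside the qemu guest. The repeating "cni plugin not initialized" errors (and any pods stuck Pending) are downstream of that one mount. Two quick checks from inside the VM (a diagnostic sketch; if the second command prints nothing, the guest kernel lacks BPF-filesystem support and the Calico manifest's bpffs volume can never mount on this image):

$ minikube ssh
$ ls -ld /sys/fs/bpf
$ grep bpf /proc/filesystems

If the filesystem is supported but simply unmounted, mounting it by hand may unblock the pod (an assumption, not a verified fix for this image), after which the calico-node pod should recover:

$ sudo mount -t bpf bpf /sys/fs/bpf
$ kubectl -n kube-system get pods -w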