@conradwt
Created September 15, 2022 20:30
minikube start --alsologtostderr
➜ minikube start --alsologtostderr
I0915 13:14:58.615237 18863 out.go:296] Setting OutFile to fd 1 ...
I0915 13:14:58.615347 18863 out.go:348] isatty.IsTerminal(1) = true
I0915 13:14:58.615350 18863 out.go:309] Setting ErrFile to fd 2...
I0915 13:14:58.615354 18863 out.go:348] isatty.IsTerminal(2) = true
I0915 13:14:58.615415 18863 root.go:333] Updating PATH: /Users/conradwt/.minikube/bin
I0915 13:14:58.615787 18863 out.go:303] Setting JSON to false
I0915 13:14:58.634514 18863 start.go:115] hostinfo: {"hostname":"Conrads-MacBook-Pro-2.local","uptime":3469,"bootTime":1663269429,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"af1f9cc8-e083-5726-b0ee-9c76ec901807"}
W0915 13:14:58.634611 18863 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0915 13:14:58.656548 18863 out.go:177] 😄 minikube v1.26.1 on Darwin 12.6 (arm64)
😄 minikube v1.26.1 on Darwin 12.6 (arm64)
I0915 13:14:58.702585 18863 notify.go:193] Checking for updates...
I0915 13:14:58.702852 18863 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.1
W0915 13:14:58.702950 18863 out.go:239] ❗ Specified Kubernetes version 1.25.1 is newer than the newest supported version: v1.24.3. Use `minikube config defaults kubernetes-version` for details.
❗ Specified Kubernetes version 1.25.1 is newer than the newest supported version: v1.24.3. Use `minikube config defaults kubernetes-version` for details.
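Note: to stay within the supported range called out above, the Kubernetes version can be pinned at start time, e.g. `minikube start --kubernetes-version=v1.24.3`; `minikube config defaults kubernetes-version` (referenced in the warning) lists the accepted values. The pinned version shown here is illustrative and not taken from this run.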
I0915 13:14:58.702970 18863 driver.go:365] Setting default libvirt URI to qemu:///system
I0915 13:14:58.773674 18863 docker.go:137] docker version: linux-20.10.17
I0915 13:14:58.773820 18863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 13:14:58.886195 18863 info.go:265] docker info: {ID:3K6F:7DSV:2UIY:6V6G:O4CW:O4NN:UM3P:IV3V:VU4T:Q5VO:TX6N:WWMI Containers:9 ContainersRunning:5 ContainersPaused:0 ContainersStopped:4 Images:18 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:false NGoroutines:79 SystemTime:2022-09-15 20:14:58.822781667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8321822720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:true ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Err:failed to fetch metadata: fork/exec /Users/conradwt/.docker/cli-plugins/docker-app: no such file or directory Name:app Path:/Users/conradwt/.docker/cli-plugins/docker-app] map[Name:buildx Path:/Users/conradwt/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
I0915 13:14:58.928515 18863 out.go:177] ✨ Using the docker driver based on existing profile
✨ Using the docker driver based on existing profile
I0915 13:14:58.951111 18863 start.go:284] selected driver: docker
I0915 13:14:58.951122 18863 start.go:808] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:7888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0915 13:14:58.951685 18863 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0915 13:14:58.951869 18863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 13:14:59.069516 18863 info.go:265] docker info: {ID:3K6F:7DSV:2UIY:6V6G:O4CW:O4NN:UM3P:IV3V:VU4T:Q5VO:TX6N:WWMI Containers:9 ContainersRunning:5 ContainersPaused:0 ContainersStopped:4 Images:18 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:false NGoroutines:79 SystemTime:2022-09-15 20:14:59.001864792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.124-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8321822720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:true ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Err:failed to fetch metadata: fork/exec /Users/conradwt/.docker/cli-plugins/docker-app: no such file or directory Name:app Path:/Users/conradwt/.docker/cli-plugins/docker-app] map[Name:buildx Path:/Users/conradwt/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
I0915 13:14:59.071849 18863 cni.go:95] Creating CNI manager for ""
I0915 13:14:59.071876 18863 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0915 13:14:59.071883 18863 start_flags.go:310] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:7888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0915 13:14:59.112719 18863 out.go:177] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0915 13:14:59.135673 18863 cache.go:120] Beginning downloading kic base image for docker with docker
I0915 13:14:59.158824 18863 out.go:177] 🚜 Pulling base image ...
🚜 Pulling base image ...
I0915 13:14:59.199779 18863 preload.go:132] Checking if preload exists for k8s version v1.25.1 and runtime docker
I0915 13:14:59.199804 18863 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
I0915 13:14:59.199833 18863 preload.go:148] Found local preload: /Users/conradwt/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.1-docker-overlay2-arm64.tar.lz4
I0915 13:14:59.199846 18863 cache.go:57] Caching tarball of preloaded images
I0915 13:14:59.199967 18863 preload.go:174] Found /Users/conradwt/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0915 13:14:59.199979 18863 cache.go:60] Finished verifying existence of preloaded tar for v1.25.1 on docker
I0915 13:14:59.200492 18863 profile.go:148] Saving config to /Users/conradwt/.minikube/profiles/minikube/config.json ...
I0915 13:14:59.269767 18863 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
I0915 13:14:59.269786 18863 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
I0915 13:14:59.269793 18863 cache.go:208] Successfully downloaded all kic artifacts
I0915 13:14:59.269844 18863 start.go:371] acquiring machines lock for minikube: {Name:mkbe7d97b3363b30311694eab5bb244e156aa3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 13:14:59.269937 18863 start.go:375] acquired machines lock for "minikube" in 73.667µs
I0915 13:14:59.269950 18863 start.go:95] Skipping create...Using existing machine configuration
I0915 13:14:59.269955 18863 fix.go:55] fixHost starting:
I0915 13:14:59.270240 18863 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0915 13:14:59.328133 18863 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0915 13:14:59.328172 18863 fix.go:129] unexpected machine state, will restart: <nil>
I0915 13:14:59.375230 18863 out.go:177] 🔄 Restarting existing docker container for "minikube" ...
🔄 Restarting existing docker container for "minikube" ...
I0915 13:14:59.398532 18863 cli_runner.go:164] Run: docker start minikube
I0915 13:14:59.664143 18863 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0915 13:14:59.718913 18863 kic.go:415] container "minikube" state is running.
I0915 13:14:59.719507 18863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0915 13:14:59.769655 18863 profile.go:148] Saving config to /Users/conradwt/.minikube/profiles/minikube/config.json ...
I0915 13:14:59.769995 18863 machine.go:88] provisioning docker machine ...
I0915 13:14:59.770010 18863 ubuntu.go:169] provisioning hostname "minikube"
I0915 13:14:59.770104 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:14:59.821128 18863 main.go:134] libmachine: Using SSH client type: native
I0915 13:14:59.821363 18863 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1023f6a90] 0x1023f95c0 <nil> [] 0s} 127.0.0.1 53018 <nil> <nil>}
I0915 13:14:59.821374 18863 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0915 13:14:59.929034 18863 main.go:134] libmachine: SSH cmd err, output: <nil>: minikube
I0915 13:14:59.929177 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:14:59.979448 18863 main.go:134] libmachine: Using SSH client type: native
I0915 13:14:59.979638 18863 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1023f6a90] 0x1023f95c0 <nil> [] 0s} 127.0.0.1 53018 <nil> <nil>}
I0915 13:14:59.979649 18863 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
I0915 13:15:00.082866 18863 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0915 13:15:00.082888 18863 ubuntu.go:175] set auth options {CertDir:/Users/conradwt/.minikube CaCertPath:/Users/conradwt/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/conradwt/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/conradwt/.minikube/machines/server.pem ServerKeyPath:/Users/conradwt/.minikube/machines/server-key.pem ClientKeyPath:/Users/conradwt/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/conradwt/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/conradwt/.minikube}
I0915 13:15:00.082902 18863 ubuntu.go:177] setting up certificates
I0915 13:15:00.082908 18863 provision.go:83] configureAuth start
I0915 13:15:00.083016 18863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0915 13:15:00.133056 18863 provision.go:138] copyHostCerts
I0915 13:15:00.133153 18863 exec_runner.go:144] found /Users/conradwt/.minikube/ca.pem, removing ...
I0915 13:15:00.133160 18863 exec_runner.go:207] rm: /Users/conradwt/.minikube/ca.pem
I0915 13:15:00.133270 18863 exec_runner.go:151] cp: /Users/conradwt/.minikube/certs/ca.pem --> /Users/conradwt/.minikube/ca.pem (1042 bytes)
I0915 13:15:00.133436 18863 exec_runner.go:144] found /Users/conradwt/.minikube/cert.pem, removing ...
I0915 13:15:00.133439 18863 exec_runner.go:207] rm: /Users/conradwt/.minikube/cert.pem
I0915 13:15:00.133490 18863 exec_runner.go:151] cp: /Users/conradwt/.minikube/certs/cert.pem --> /Users/conradwt/.minikube/cert.pem (1082 bytes)
I0915 13:15:00.133583 18863 exec_runner.go:144] found /Users/conradwt/.minikube/key.pem, removing ...
I0915 13:15:00.133586 18863 exec_runner.go:207] rm: /Users/conradwt/.minikube/key.pem
I0915 13:15:00.133630 18863 exec_runner.go:151] cp: /Users/conradwt/.minikube/certs/key.pem --> /Users/conradwt/.minikube/key.pem (1679 bytes)
I0915 13:15:00.133707 18863 provision.go:112] generating server cert: /Users/conradwt/.minikube/machines/server.pem ca-key=/Users/conradwt/.minikube/certs/ca.pem private-key=/Users/conradwt/.minikube/certs/ca-key.pem org=conradwt.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0915 13:15:00.167700 18863 provision.go:172] copyRemoteCerts
I0915 13:15:00.167828 18863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0915 13:15:00.167877 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:00.217205 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:00.293015 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1042 bytes)
I0915 13:15:00.304119 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/machines/server.pem --> /etc/docker/server.pem (1159 bytes)
I0915 13:15:00.314488 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0915 13:15:00.324707 18863 provision.go:86] duration metric: configureAuth took 241.784541ms
I0915 13:15:00.324727 18863 ubuntu.go:193] setting minikube options for container-runtime
I0915 13:15:00.324868 18863 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.1
I0915 13:15:00.324958 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:00.373196 18863 main.go:134] libmachine: Using SSH client type: native
I0915 13:15:00.373343 18863 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1023f6a90] 0x1023f95c0 <nil> [] 0s} 127.0.0.1 53018 <nil> <nil>}
I0915 13:15:00.373350 18863 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0915 13:15:00.479982 18863 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0915 13:15:00.480001 18863 ubuntu.go:71] root file system type: overlay
I0915 13:15:00.480130 18863 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0915 13:15:00.480256 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:00.535248 18863 main.go:134] libmachine: Using SSH client type: native
I0915 13:15:00.535403 18863 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1023f6a90] 0x1023f95c0 <nil> [] 0s} 127.0.0.1 53018 <nil> <nil>}
I0915 13:15:00.535445 18863 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0915 13:15:00.652464 18863 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
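Note: the rendered unit can be double-checked from the host with `minikube ssh -- sudo systemctl cat docker.service`; the log below runs the equivalent `sudo systemctl cat docker.service` inside the node, so this is only a convenience variant and not part of this run.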
I0915 13:15:00.652662 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:00.708509 18863 main.go:134] libmachine: Using SSH client type: native
I0915 13:15:00.708652 18863 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1023f6a90] 0x1023f95c0 <nil> [] 0s} 127.0.0.1 53018 <nil> <nil>}
I0915 13:15:00.708663 18863 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0915 13:15:00.818895 18863 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0915 13:15:00.818914 18863 machine.go:91] provisioned docker machine in 1.048918667s
I0915 13:15:00.818921 18863 start.go:307] post-start starting for "minikube" (driver="docker")
I0915 13:15:00.818926 18863 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0915 13:15:00.819102 18863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0915 13:15:00.819187 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:00.875941 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:00.959391 18863 ssh_runner.go:195] Run: cat /etc/os-release
I0915 13:15:00.963249 18863 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0915 13:15:00.963284 18863 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0915 13:15:00.963297 18863 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0915 13:15:00.963305 18863 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0915 13:15:00.963322 18863 filesync.go:126] Scanning /Users/conradwt/.minikube/addons for local assets ...
I0915 13:15:00.963574 18863 filesync.go:126] Scanning /Users/conradwt/.minikube/files for local assets ...
I0915 13:15:00.963645 18863 start.go:310] post-start completed in 144.718667ms
I0915 13:15:00.963828 18863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0915 13:15:00.963912 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:01.019277 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:01.104806 18863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0915 13:15:01.111491 18863 fix.go:57] fixHost completed within 1.841546542s
I0915 13:15:01.111503 18863 start.go:82] releasing machines lock for "minikube", held for 1.841571667s
I0915 13:15:01.111592 18863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0915 13:15:01.176922 18863 ssh_runner.go:195] Run: systemctl --version
I0915 13:15:01.177002 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:01.178359 18863 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0915 13:15:01.178448 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:01.246539 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:01.246593 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:01.452375 18863 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0915 13:15:01.468078 18863 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0915 13:15:01.468304 18863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0915 13:15:01.479555 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0915 13:15:01.491337 18863 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0915 13:15:01.558349 18863 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0915 13:15:01.611176 18863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 13:15:01.662309 18863 ssh_runner.go:195] Run: sudo systemctl restart docker
I0915 13:15:01.848013 18863 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0915 13:15:01.894128 18863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 13:15:01.942046 18863 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0915 13:15:01.947983 18863 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0915 13:15:01.949244 18863 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0915 13:15:01.951589 18863 start.go:471] Will wait 60s for crictl version
I0915 13:15:01.951719 18863 ssh_runner.go:195] Run: sudo crictl version
I0915 13:15:02.009774 18863 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0915 13:15:02.009998 18863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0915 13:15:02.032837 18863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0915 13:15:02.108295 18863 out.go:204] 🐳 Preparing Kubernetes v1.25.1 on Docker 20.10.17 ...
🐳 Preparing Kubernetes v1.25.1 on Docker 20.10.17 ...
I0915 13:15:02.108744 18863 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0915 13:15:02.217728 18863 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0915 13:15:02.218197 18863 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0915 13:15:02.220924 18863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0915 13:15:02.226863 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0915 13:15:02.278450 18863 preload.go:132] Checking if preload exists for k8s version v1.25.1 and runtime docker
I0915 13:15:02.278553 18863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 13:15:02.300262 18863 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.1
registry.k8s.io/kube-proxy:v1.25.1
registry.k8s.io/kube-scheduler:v1.25.1
registry.k8s.io/kube-controller-manager:v1.25.1
k8s.gcr.io/kube-apiserver:v1.24.5
k8s.gcr.io/kube-proxy:v1.24.5
k8s.gcr.io/kube-controller-manager:v1.24.5
k8s.gcr.io/kube-scheduler:v1.24.5
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0915 13:15:02.300277 18863 docker.go:617] k8s.gcr.io/kube-apiserver:v1.25.1 wasn't preloaded
I0915 13:15:02.300385 18863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0915 13:15:02.305917 18863 ssh_runner.go:195] Run: which lz4
I0915 13:15:02.308099 18863 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0915 13:15:02.310208 18863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0915 13:15:02.310236 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (336363150 bytes)
I0915 13:15:05.164845 18863 docker.go:576] Took 2.856802 seconds to copy over tarball
I0915 13:15:05.165173 18863 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0915 13:15:06.539257 18863 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.37405225s)
I0915 13:15:06.539339 18863 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0915 13:15:06.594811 18863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0915 13:15:06.600154 18863 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3839 bytes)
I0915 13:15:06.607594 18863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 13:15:06.655267 18863 ssh_runner.go:195] Run: sudo systemctl restart docker
I0915 13:15:07.177116 18863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 13:15:07.198053 18863 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.1
registry.k8s.io/kube-controller-manager:v1.25.1
registry.k8s.io/kube-proxy:v1.25.1
registry.k8s.io/kube-scheduler:v1.25.1
k8s.gcr.io/kube-apiserver:v1.24.5
k8s.gcr.io/kube-controller-manager:v1.24.5
k8s.gcr.io/kube-proxy:v1.24.5
k8s.gcr.io/kube-scheduler:v1.24.5
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
<none>:<none>
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0915 13:15:07.198081 18863 docker.go:617] k8s.gcr.io/kube-apiserver:v1.25.1 wasn't preloaded
I0915 13:15:07.198089 18863 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.25.1 k8s.gcr.io/kube-controller-manager:v1.25.1 k8s.gcr.io/kube-scheduler:v1.25.1 k8s.gcr.io/kube-proxy:v1.25.1 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.4-0 k8s.gcr.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
I0915 13:15:07.203839 18863 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.25.1
I0915 13:15:07.204095 18863 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.25.1
I0915 13:15:07.204320 18863 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.4-0
I0915 13:15:07.204769 18863 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0915 13:15:07.205111 18863 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:07.205891 18863 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.9.3
I0915 13:15:07.206441 18863 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.25.1
I0915 13:15:07.206552 18863 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.25.1
I0915 13:15:07.208793 18863 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.25.1: Error: No such image: k8s.gcr.io/kube-proxy:v1.25.1
I0915 13:15:07.209624 18863 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.4-0: Error: No such image: k8s.gcr.io/etcd:3.5.4-0
I0915 13:15:07.212041 18863 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:07.212067 18863 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.25.1: Error: No such image: k8s.gcr.io/kube-scheduler:v1.25.1
I0915 13:15:07.212071 18863 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.25.1: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.25.1
I0915 13:15:07.211956 18863 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I0915 13:15:07.211936 18863 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.9.3: Error: No such image: k8s.gcr.io/coredns/coredns:v1.9.3
I0915 13:15:07.213035 18863 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.25.1: Error: No such image: k8s.gcr.io/kube-apiserver:v1.25.1
I0915 13:15:07.862241 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.5.4-0
I0915 13:15:07.893063 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.7
I0915 13:15:07.909112 18863 cache_images.go:116] "k8s.gcr.io/etcd:3.5.4-0" needs transfer: "k8s.gcr.io/etcd:3.5.4-0" does not exist at hash "8e041a3b0ba8b5f930b1732f7e2ddb654b1739c89b068ff433008d633a51cd03" in container runtime
I0915 13:15:07.909153 18863 docker.go:292] Removing image: k8s.gcr.io/etcd:3.5.4-0
I0915 13:15:07.909258 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.5.4-0
I0915 13:15:07.936418 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.4-0
I0915 13:15:07.945804 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.25.1
I0915 13:15:07.949827 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.25.1
I0915 13:15:07.961965 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.25.1
I0915 13:15:07.967280 18863 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.25.1" needs transfer: "k8s.gcr.io/kube-scheduler:v1.25.1" does not exist at hash "2ebc629e14a522d5cea7495920b8a7f3d5cf46c683db2a1c21266bad018f56a4" in container runtime
I0915 13:15:07.967305 18863 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.25.1
I0915 13:15:07.967397 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.25.1
I0915 13:15:07.970147 18863 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.25.1" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.25.1" does not exist at hash "402a516050cc0a62f6a1c52a82c35a37bc05f7eb1e4fd5a6a34596840a5ef28c" in container runtime
I0915 13:15:07.970167 18863 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.25.1
I0915 13:15:07.970240 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.25.1
I0915 13:15:07.985339 18863 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.25.1" needs transfer: "k8s.gcr.io/kube-proxy:v1.25.1" does not exist at hash "345ec35538c02234522cc404daf215cc526c9877c5336834724aa48a55cf53c5" in container runtime
I0915 13:15:07.985364 18863 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.25.1
I0915 13:15:07.985481 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.25.1
I0915 13:15:07.993744 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/kube-controller-manager_v1.25.1
I0915 13:15:07.993752 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/kube-scheduler_v1.25.1
I0915 13:15:08.005087 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/kube-proxy_v1.25.1
W0915 13:15:08.055425 18863 image.go:265] image k8s.gcr.io/coredns/coredns:v1.9.3 arch mismatch: want arm64 got amd64. fixing
I0915 13:15:08.056020 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns/coredns:v1.9.3
I0915 13:15:08.076480 18863 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.9.3" needs transfer: "k8s.gcr.io/coredns/coredns:v1.9.3" does not exist at hash "2307e151c6ff0b2c679ef59252c31058366d46c44c1ca070a22e5c241acaca75" in container runtime
I0915 13:15:08.076534 18863 docker.go:292] Removing image: k8s.gcr.io/coredns/coredns:v1.9.3
I0915 13:15:08.076731 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns/coredns:v1.9.3
I0915 13:15:08.089718 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.25.1
I0915 13:15:08.096737 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/coredns/coredns_v1.9.3
I0915 13:15:08.109763 18863 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.25.1" needs transfer: "k8s.gcr.io/kube-apiserver:v1.25.1" does not exist at hash "e55734e4dd0a3c84cc851984d5ebdbf787b8ecddc3a475f6eb4cb9af4a97dd81" in container runtime
I0915 13:15:08.109829 18863 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.25.1
I0915 13:15:08.110136 18863 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.25.1
I0915 13:15:08.128901 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/kube-apiserver_v1.25.1
W0915 13:15:08.142923 18863 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I0915 13:15:08.143274 18863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:08.162753 18863 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I0915 13:15:08.162825 18863 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:08.163250 18863 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:08.190527 18863 cache_images.go:286] Loading image from: /Users/conradwt/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I0915 13:15:08.191094 18863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0915 13:15:08.193445 18863 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I0915 13:15:08.193462 18863 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0915 13:15:08.193474 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0915 13:15:08.250406 18863 cache_images.go:315] Transferred and loaded /Users/conradwt/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0915 13:15:08.250500 18863 cache_images.go:92] LoadImages completed in 1.05235025s
W0915 13:15:08.250776 18863 out.go:239] ❌ Unable to load cached images: loading cached images: stat /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.4-0: no such file or directory
❌ Unable to load cached images: loading cached images: stat /Users/conradwt/.minikube/cache/images/arm64/k8s.gcr.io/etcd_3.5.4-0: no such file or directory
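Note: this warning is non-fatal; it only means the host-side cache file for `k8s.gcr.io/etcd_3.5.4-0` was missing, and images that are not preloaded are pulled from the registry during bootstrap. If the cache needs inspection or cleanup, `minikube cache list` and `minikube cache delete <image>` are the relevant commands; they are suggested here as a possible follow-up, not something this run executes.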
I0915 13:15:08.251226 18863 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0915 13:15:08.298559 18863 cni.go:95] Creating CNI manager for ""
I0915 13:15:08.298587 18863 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0915 13:15:08.298619 18863 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0915 13:15:08.298637 18863 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.25.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0915 13:15:08.298995 18863 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0915 13:15:08.299142 18863 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
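Note: the rendered kubeadm config is written to `/var/tmp/minikube/kubeadm.yaml.new` (copied over a few lines below), so it can be inspected afterwards with something like `minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new`; this command is a suggestion and not part of this run.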
I0915 13:15:08.299370 18863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.1
I0915 13:15:08.304037 18863 binaries.go:44] Found k8s binaries, skipping transfer
I0915 13:15:08.304170 18863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0915 13:15:08.308288 18863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0915 13:15:08.315459 18863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0915 13:15:08.322802 18863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0915 13:15:08.330059 18863 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0915 13:15:08.332189 18863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0915 13:15:08.337472 18863 certs.go:54] Setting up /Users/conradwt/.minikube/profiles/minikube for IP: 192.168.49.2
I0915 13:15:08.337585 18863 certs.go:182] skipping minikubeCA CA generation: /Users/conradwt/.minikube/ca.key
I0915 13:15:08.337626 18863 certs.go:182] skipping proxyClientCA CA generation: /Users/conradwt/.minikube/proxy-client-ca.key
I0915 13:15:08.337682 18863 certs.go:298] skipping minikube-user signed cert generation: /Users/conradwt/.minikube/profiles/minikube/client.key
I0915 13:15:08.337721 18863 certs.go:298] skipping minikube signed cert generation: /Users/conradwt/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0915 13:15:08.337756 18863 certs.go:298] skipping aggregator signed cert generation: /Users/conradwt/.minikube/profiles/minikube/proxy-client.key
I0915 13:15:08.337880 18863 certs.go:388] found cert: /Users/conradwt/.minikube/certs/Users/conradwt/.minikube/certs/ca-key.pem (1679 bytes)
I0915 13:15:08.337904 18863 certs.go:388] found cert: /Users/conradwt/.minikube/certs/Users/conradwt/.minikube/certs/ca.pem (1042 bytes)
I0915 13:15:08.337924 18863 certs.go:388] found cert: /Users/conradwt/.minikube/certs/Users/conradwt/.minikube/certs/cert.pem (1082 bytes)
I0915 13:15:08.337947 18863 certs.go:388] found cert: /Users/conradwt/.minikube/certs/Users/conradwt/.minikube/certs/key.pem (1679 bytes)
I0915 13:15:08.338270 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0915 13:15:08.347931 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0915 13:15:08.357487 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0915 13:15:08.366856 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0915 13:15:08.376363 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0915 13:15:08.385973 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0915 13:15:08.395433 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0915 13:15:08.404677 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0915 13:15:08.413913 18863 ssh_runner.go:362] scp /Users/conradwt/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0915 13:15:08.423149 18863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0915 13:15:08.431542 18863 ssh_runner.go:195] Run: openssl version
I0915 13:15:08.434622 18863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0915 13:15:08.439017 18863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0915 13:15:08.441236 18863 certs.go:431] hashing: -rw-r--r-- 1 root root 1066 Sep 10 2020 /usr/share/ca-certificates/minikubeCA.pem
I0915 13:15:08.441303 18863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0915 13:15:08.444343 18863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0915 13:15:08.448563 18863 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:7888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0915 13:15:08.448786 18863 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 13:15:08.468052 18863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0915 13:15:08.472894 18863 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0915 13:15:08.472911 18863 kubeadm.go:626] restartCluster start
I0915 13:15:08.473013 18863 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0915 13:15:08.476971 18863 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0915 13:15:08.477059 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0915 13:15:08.528118 18863 kubeconfig.go:116] verify returned: extract IP: "minikube" does not appear in /Users/conradwt/.kube/config
I0915 13:15:08.528544 18863 kubeconfig.go:127] "minikube" context is missing from /Users/conradwt/.kube/config - will repair!
I0915 13:15:08.529183 18863 lock.go:35] WriteFile acquiring /Users/conradwt/.kube/config: {Name:mkbc4267338bacd60a6afaa07281bd9df9ff204f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
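Here minikube detects that the "minikube" context is missing from ~/.kube/config and rewrites the file under a lock. The same condition can be checked by hand (a sketch, assuming kubectl reads the same default kubeconfig):

  # The repair above fires when "minikube" is absent from this list
  kubectl config get-contexts -o name
  # Or inspect the context names directly in the file minikube rewrites
  kubectl config view --raw -o jsonpath='{.contexts[*].name}{"\n"}'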
I0915 13:15:08.530742 18863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0915 13:15:08.536150 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:08.536239 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:08.541251 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:08.742422 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:08.742820 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:08.752151 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:08.942376 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:08.942730 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:08.963345 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:09.142360 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:09.142769 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:09.163213 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:09.342369 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:09.342680 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:09.363472 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:09.542267 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:09.542647 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:09.564363 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:09.742387 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:09.742646 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:09.759041 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:09.942381 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:09.942602 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:09.960343 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:10.142407 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:10.142705 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:10.159835 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:10.342430 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:10.342771 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:10.362991 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:10.542382 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:10.542765 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:10.564651 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:10.741965 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:10.742385 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:10.762977 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:10.942360 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:10.942699 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:10.963405 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.142349 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:11.142630 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:11.162132 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.341726 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:11.342084 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:11.362395 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.542369 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:11.542732 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:11.565282 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.565306 18863 api_server.go:165] Checking apiserver status ...
I0915 13:15:11.565460 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 13:15:11.579381 18863 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.579410 18863 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
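The block above is the same probe issued roughly every 200ms for about three seconds: pgrep looks for a kube-apiserver process whose command line mentions the profile, and exit status 1 means none exists. Only after every attempt fails does minikube conclude the control plane needs reconfiguring. The probe can be repeated by hand (a sketch, assuming `minikube ssh` reaches the same node):

  # Newest kube-apiserver PID for this profile; exit status 1 (as in the log) means it is not running
  minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'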
I0915 13:15:11.579419 18863 kubeadm.go:1092] stopping kube-system containers ...
I0915 13:15:11.579632 18863 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 13:15:11.615994 18863 docker.go:443] Stopping containers: [f801d20e772b 8a2e3663f108 edae9dfadbac 783d17f90a16 527d400886cc 767103da1d5d e43331c91963 3e820cb18787 e060e966d994 7f476af053b8 e67e8715523d 8ddde1e36d81 2c9d167d70ee 1a83bfbe4394 cdec1f476ee8 da6a2814ee31 15d0dc65e34d b0f20e17dd5d fc18a720e630 9daad105227a 62c19c0e4041 36d538ba658b 768f70fd0469 546c9408a011 5a2df0e93ea8 22c25e2aeb38 6ad23e433e26 89389da6974c]
I0915 13:15:11.616243 18863 ssh_runner.go:195] Run: docker stop f801d20e772b 8a2e3663f108 edae9dfadbac 783d17f90a16 527d400886cc 767103da1d5d e43331c91963 3e820cb18787 e060e966d994 7f476af053b8 e67e8715523d 8ddde1e36d81 2c9d167d70ee 1a83bfbe4394 cdec1f476ee8 da6a2814ee31 15d0dc65e34d b0f20e17dd5d fc18a720e630 9daad105227a 62c19c0e4041 36d538ba658b 768f70fd0469 546c9408a011 5a2df0e93ea8 22c25e2aeb38 6ad23e433e26 89389da6974c
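The IDs passed to docker stop were produced by the name filter on the preceding Run line; kubelet-managed containers are named k8s_<container>_<pod>_<namespace>_..., so the filter catches everything in kube-system. Piping one command into the other gives the same effect (a sketch; the filter needs shell quoting, and -r assumes GNU xargs on the node):

  # Stop every kube-system container the kubelet created
  docker ps -a --filter "name=k8s_.*_(kube-system)_" --format '{{.ID}}' | xargs -r docker stop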
I0915 13:15:11.640933 18863 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0915 13:15:11.648168 18863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0915 13:15:11.652615 18863 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5523 Sep 15 19:33 /etc/kubernetes/admin.conf
-rw------- 1 root root 5536 Sep 15 19:50 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 Sep 15 19:33 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5484 Sep 15 19:50 /etc/kubernetes/scheduler.conf
I0915 13:15:11.652834 18863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0915 13:15:11.657212 18863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0915 13:15:11.661561 18863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0915 13:15:11.665793 18863 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.666019 18863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0915 13:15:11.670139 18863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0915 13:15:11.674280 18863 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0915 13:15:11.674487 18863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
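The grep/rm sequence above keeps only the /etc/kubernetes/*.conf files that already reference the expected control-plane endpoint; controller-manager.conf and scheduler.conf fail the check in this run and are removed so kubeadm can regenerate them. A compact version of the same check (a sketch, run inside the node):

  # Print only the files that already point at the in-cluster API endpoint
  sudo grep -l 'https://control-plane.minikube.internal:8443' \
    /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf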
I0915 13:15:11.678691 18863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0915 13:15:11.683175 18863 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0915 13:15:11.683186 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0915 13:15:11.712971 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0915 13:15:12.144616 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0915 13:15:12.227949 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0915 13:15:12.255550 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
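Rather than a full `kubeadm init`, the restart path re-runs only the phases it needs, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the generated /var/tmp/minikube/kubeadm.yaml. Stripped of the ssh wrapper, the five Run lines above amount to:

  # The PATH prefix selects minikube's cached v1.25.1 binaries inside the node
  sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml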
I0915 13:15:12.283838 18863 api_server.go:51] waiting for apiserver process to appear ...
I0915 13:15:12.284087 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 13:15:12.791400 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 13:15:13.291255 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 13:15:13.300600 18863 api_server.go:71] duration metric: took 1.016772208s to wait for apiserver process to appear ...
I0915 13:15:13.300637 18863 api_server.go:87] waiting for apiserver healthz status ...
I0915 13:15:13.300714 18863 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53022/healthz ...
I0915 13:15:15.244091 18863 api_server.go:266] https://127.0.0.1:53022/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0915 13:15:15.244142 18863 api_server.go:102] status: https://127.0.0.1:53022/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0915 13:15:15.745335 18863 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53022/healthz ...
I0915 13:15:15.764523 18863 api_server.go:266] https://127.0.0.1:53022/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0915 13:15:15.764663 18863 api_server.go:102] status: https://127.0.0.1:53022/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0915 13:15:16.245332 18863 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53022/healthz ...
I0915 13:15:16.255644 18863 api_server.go:266] https://127.0.0.1:53022/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0915 13:15:16.255696 18863 api_server.go:102] status: https://127.0.0.1:53022/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0915 13:15:16.745264 18863 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53022/healthz ...
I0915 13:15:16.757629 18863 api_server.go:266] https://127.0.0.1:53022/healthz returned 200:
ok
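The progression above is the normal shape of an apiserver coming back: first 403 because the unauthenticated probe (system:anonymous) is rejected until the bootstrap RBAC roles that allow anonymous access to /healthz have been created, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200 once every check passes. The same per-check detail can be pulled with authenticated requests through kubectl (a sketch; --raw goes to whatever cluster the current context points at):

  # Verbose health/readiness breakdown, matching the [+]/[-] lines in the log
  kubectl get --raw '/healthz?verbose'
  kubectl get --raw '/readyz?verbose'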
I0915 13:15:16.770598 18863 api_server.go:140] control plane version: v1.25.1
I0915 13:15:16.770622 18863 api_server.go:130] duration metric: took 3.469997916s to wait for apiserver health ...
I0915 13:15:16.770632 18863 cni.go:95] Creating CNI manager for ""
I0915 13:15:16.770640 18863 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0915 13:15:16.770648 18863 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 13:15:16.782417 18863 system_pods.go:59] 7 kube-system pods found
I0915 13:15:16.782448 18863 system_pods.go:61] "coredns-565d847f94-lqlmv" [6a580616-ffe8-4bac-9997-2b4c27619a38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 13:15:16.782453 18863 system_pods.go:61] "etcd-minikube" [3d2f847c-d84f-4f1a-9e72-1dfd36f008dd] Running
I0915 13:15:16.782456 18863 system_pods.go:61] "kube-apiserver-minikube" [92fc1955-98ed-4da9-8233-fce6530dba09] Running
I0915 13:15:16.782460 18863 system_pods.go:61] "kube-controller-manager-minikube" [64574519-e35f-41ef-9b58-025aa333ae61] Running
I0915 13:15:16.782464 18863 system_pods.go:61] "kube-proxy-l2gbn" [1b2eb2ea-1a72-4850-b33c-01210477fdd9] Running
I0915 13:15:16.782466 18863 system_pods.go:61] "kube-scheduler-minikube" [e3ac2246-0f6c-4b3e-b8f3-992a7e186b4a] Running
I0915 13:15:16.782468 18863 system_pods.go:61] "storage-provisioner" [1444c69c-faa0-4833-96e9-013784e920bf] Running
I0915 13:15:16.782603 18863 system_pods.go:74] duration metric: took 11.949ms to wait for pod list to return data ...
I0915 13:15:16.782615 18863 node_conditions.go:102] verifying NodePressure condition ...
I0915 13:15:16.784968 18863 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
I0915 13:15:16.784981 18863 node_conditions.go:123] node cpu capacity is 4
I0915 13:15:16.784988 18863 node_conditions.go:105] duration metric: took 2.358625ms to run NodePressure ...
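The NodePressure step reads the node's reported capacity (the ephemeral-storage and CPU figures above) as part of verifying the node is not under resource pressure. The same view from kubectl (a sketch, assuming the current context is the minikube cluster):

  # Capacity figures corresponding to the log lines above
  kubectl get node minikube -o jsonpath='{.status.capacity}{"\n"}'
  # Pressure conditions; MemoryPressure, DiskPressure and PIDPressure should all be False
  kubectl describe node minikube | grep -A 8 'Conditions:'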
I0915 13:15:16.784997 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0915 13:15:16.871935 18863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0915 13:15:16.876343 18863 ops.go:34] apiserver oom_adj: -16
I0915 13:15:16.876375 18863 kubeadm.go:630] restartCluster took 8.4035085s
I0915 13:15:16.876391 18863 kubeadm.go:397] StartCluster complete in 8.427879875s
I0915 13:15:16.876423 18863 settings.go:142] acquiring lock: {Name:mka743d47d3c58ae040aab9cefe9497551e52d16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 13:15:16.876689 18863 settings.go:150] Updating kubeconfig: /Users/conradwt/.kube/config
I0915 13:15:16.881080 18863 lock.go:35] WriteFile acquiring /Users/conradwt/.kube/config: {Name:mkbc4267338bacd60a6afaa07281bd9df9ff204f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 13:15:16.883570 18863 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0915 13:15:16.883602 18863 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.25.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0915 13:15:16.908588 18863 out.go:177] πŸ”Ž Verifying Kubernetes components...
πŸ”Ž Verifying Kubernetes components...
I0915 13:15:16.885205 18863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0915 13:15:16.885412 18863 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0915 13:15:16.885620 18863 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.1
I0915 13:15:16.968115 18863 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0915 13:15:16.968137 18863 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0915 13:15:16.968142 18863 addons.go:162] addon storage-provisioner should already be in state true
I0915 13:15:16.968148 18863 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0915 13:15:16.968187 18863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0915 13:15:16.968192 18863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0915 13:15:16.968188 18863 host.go:66] Checking if "minikube" exists ...
I0915 13:15:16.968549 18863 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0915 13:15:16.968656 18863 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0915 13:15:17.026298 18863 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0915 13:15:17.026339 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0915 13:15:17.053207 18863 out.go:177] β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0915 13:15:17.031776 18863 addons.go:153] Setting addon default-storageclass=true in "minikube"
β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
W0915 13:15:17.053230 18863 addons.go:162] addon default-storageclass should already be in state true
I0915 13:15:17.076668 18863 host.go:66] Checking if "minikube" exists ...
I0915 13:15:17.076732 18863 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0915 13:15:17.076740 18863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0915 13:15:17.076854 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:17.077187 18863 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0915 13:15:17.084595 18863 api_server.go:51] waiting for apiserver process to appear ...
I0915 13:15:17.084760 18863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 13:15:17.091312 18863 api_server.go:71] duration metric: took 207.689917ms to wait for apiserver process to appear ...
I0915 13:15:17.091333 18863 api_server.go:87] waiting for apiserver healthz status ...
I0915 13:15:17.091343 18863 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53022/healthz ...
I0915 13:15:17.095297 18863 api_server.go:266] https://127.0.0.1:53022/healthz returned 200:
ok
I0915 13:15:17.096038 18863 api_server.go:140] control plane version: v1.25.1
I0915 13:15:17.096044 18863 api_server.go:130] duration metric: took 4.7085ms to wait for apiserver health ...
I0915 13:15:17.096048 18863 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 13:15:17.099743 18863 system_pods.go:59] 7 kube-system pods found
I0915 13:15:17.099758 18863 system_pods.go:61] "coredns-565d847f94-lqlmv" [6a580616-ffe8-4bac-9997-2b4c27619a38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 13:15:17.099763 18863 system_pods.go:61] "etcd-minikube" [3d2f847c-d84f-4f1a-9e72-1dfd36f008dd] Running
I0915 13:15:17.099766 18863 system_pods.go:61] "kube-apiserver-minikube" [92fc1955-98ed-4da9-8233-fce6530dba09] Running
I0915 13:15:17.099769 18863 system_pods.go:61] "kube-controller-manager-minikube" [64574519-e35f-41ef-9b58-025aa333ae61] Running
I0915 13:15:17.099771 18863 system_pods.go:61] "kube-proxy-l2gbn" [1b2eb2ea-1a72-4850-b33c-01210477fdd9] Running
I0915 13:15:17.099773 18863 system_pods.go:61] "kube-scheduler-minikube" [e3ac2246-0f6c-4b3e-b8f3-992a7e186b4a] Running
I0915 13:15:17.099775 18863 system_pods.go:61] "storage-provisioner" [1444c69c-faa0-4833-96e9-013784e920bf] Running
I0915 13:15:17.099778 18863 system_pods.go:74] duration metric: took 3.72725ms to wait for pod list to return data ...
I0915 13:15:17.099782 18863 kubeadm.go:572] duration metric: took 216.171292ms to wait for : map[apiserver:true system_pods:true] ...
I0915 13:15:17.099788 18863 node_conditions.go:102] verifying NodePressure condition ...
I0915 13:15:17.101577 18863 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
I0915 13:15:17.101588 18863 node_conditions.go:123] node cpu capacity is 4
I0915 13:15:17.101595 18863 node_conditions.go:105] duration metric: took 1.80425ms to run NodePressure ...
I0915 13:15:17.101603 18863 start.go:216] waiting for startup goroutines ...
I0915 13:15:17.131183 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:17.131855 18863 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0915 13:15:17.131864 18863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0915 13:15:17.131969 18863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0915 13:15:17.183270 18863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53018 SSHKeyPath:/Users/conradwt/.minikube/machines/minikube/id_rsa Username:docker}
I0915 13:15:17.212183 18863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0915 13:15:17.296149 18863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0915 13:15:17.770289 18863 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
🌟 Enabled addons: storage-provisioner, default-storageclass
I0915 13:15:17.789185 18863 addons.go:414] enableAddons completed in 903.730208ms
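Both addons are applied by copying their manifests into /etc/kubernetes/addons inside the node and running the bundled kubectl against /var/lib/minikube/kubeconfig, as the Run lines above show. From the host, the end state can be confirmed with (a sketch):

  # Per-profile addon summary
  minikube addons list
  # The storage-provisioner pod and the default StorageClass installed by default-storageclass
  kubectl -n kube-system get pod storage-provisioner
  kubectl get storageclass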
I0915 13:15:17.821171 18863 start.go:506] kubectl: 1.25.1, cluster: 1.25.1 (minor skew: 0)
I0915 13:15:17.844126 18863 out.go:177] πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default