@Slach
Created December 24, 2019 09:42
minikube logs
* ==> Docker <==
* -- Logs begin at Tue 2019-12-24 09:36:37 UTC, end at Tue 2019-12-24 09:41:14 UTC. --
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.226032614Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.226276799Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227344467Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227435429Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227552987Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227587259Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227615647Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227639019Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227660791Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227684806Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227707056Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227730088Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227752166Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227842568Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227872666Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227899183Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.227922531Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.228261610Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.228332270Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.228353762Z" level=info msg="containerd successfully booted in 0.018525s"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.247293225Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.247374245Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.247412072Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.247517342Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.249825810Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.249933268Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.249985560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.250011766Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300321584Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300400989Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300420004Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300436779Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300453357Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300572572Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.300892791Z" level=info msg="Loading containers: start."
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.494255740Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.634267533Z" level=info msg="Loading containers: done."
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.729505630Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.729847022Z" level=info msg="Daemon has completed initialization"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.781767263Z" level=info msg="API listen on /var/run/docker.sock"
* Dec 24 09:36:55 minikube dockerd[2452]: time="2019-12-24T09:36:55.782051045Z" level=info msg="API listen on [::]:2376"
* Dec 24 09:36:55 minikube systemd[1]: Started Docker Application Container Engine.
* Dec 24 09:38:51 minikube dockerd[2452]: time="2019-12-24T09:38:51.737082716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c5e3d7531cde3baacf9343a3fe501524f3f35e5b90db263bc212063996feeda2/shim.sock" debug=false pid=3927
* Dec 24 09:38:51 minikube dockerd[2452]: time="2019-12-24T09:38:51.842011716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a57c1285c9633d987f6e5b623d34c5e3a7c110669f9673e0bd76903ff5c2c841/shim.sock" debug=false pid=3948
* Dec 24 09:38:52 minikube dockerd[2452]: time="2019-12-24T09:38:52.259344699Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/366baececbaf4fd9a6628c494f57f7af5db442c91078ef4ae24e5cdfb1c65ccd/shim.sock" debug=false pid=4034
* Dec 24 09:38:52 minikube dockerd[2452]: time="2019-12-24T09:38:52.511790398Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d8231ba7bd408318c136d24649a998e60c59cfce6bbd8ccbb4bf109075c6a15b/shim.sock" debug=false pid=4108
* Dec 24 09:38:52 minikube dockerd[2452]: time="2019-12-24T09:38:52.937205845Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c21fc94ae69410f0d3c54705b5d46af420592111f04ab9ff6b48593290a0b79e/shim.sock" debug=false pid=4186
* Dec 24 09:38:52 minikube dockerd[2452]: time="2019-12-24T09:38:52.948989667Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dac36cef723e8eab24a69108625baae4dc554949e2893eada374dfda40e40a74/shim.sock" debug=false pid=4187
* Dec 24 09:38:53 minikube dockerd[2452]: time="2019-12-24T09:38:53.037468589Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3ca1d13f7c1fb8295cd80ba07eb89fba50feaa9245fc8c5725a2d358af621be2/shim.sock" debug=false pid=4215
* Dec 24 09:38:53 minikube dockerd[2452]: time="2019-12-24T09:38:53.481805410Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8c0f645eb0d61c52fceed29be79835cb2cc3b55a08884c77d9bb2b2892a85096/shim.sock" debug=false pid=4297
* Dec 24 09:38:53 minikube dockerd[2452]: time="2019-12-24T09:38:53.639316019Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54ea10bb8e6e541f0b178173b9f54cc2ecaf47b27390f4cb18cf5305d18df809/shim.sock" debug=false pid=4328
* Dec 24 09:38:54 minikube dockerd[2452]: time="2019-12-24T09:38:54.599203294Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/591a9c6d529921849a89b0208cc528881d912c01484bdf7d53b12255357fa197/shim.sock" debug=false pid=4453
* Dec 24 09:39:27 minikube dockerd[2452]: time="2019-12-24T09:39:27.589719623Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b618df2897fbc561121c21f3c0ddecbad1e7f0c3c020655bb10a5e04179675a7/shim.sock" debug=false pid=5152
* Dec 24 09:39:28 minikube dockerd[2452]: time="2019-12-24T09:39:28.405398966Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c1bc1a59a901d18ee66dd6781b7797f6995d2b471c5239e69bc84dcd6bc9392c/shim.sock" debug=false pid=5211
* Dec 24 09:39:31 minikube dockerd[2452]: time="2019-12-24T09:39:31.015052304Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ce228d40327a32b859bedd420dbeff461bfcf4d6bfac46bcfae0fb3a62f5802/shim.sock" debug=false pid=5354
* Dec 24 09:39:31 minikube dockerd[2452]: time="2019-12-24T09:39:31.413019835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3ebaaf4947a15d09b033e3fa937763e96865aa5c81ed5280ce60caaa2c1d63ec/shim.sock" debug=false pid=5400
* Dec 24 09:39:33 minikube dockerd[2452]: time="2019-12-24T09:39:33.248901205Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be7f274cfa2669b013b9e368c5a8142d640614e4cf4380101f8ef19d82fd60c5/shim.sock" debug=false pid=5485
* Dec 24 09:39:34 minikube dockerd[2452]: time="2019-12-24T09:39:34.417256387Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0093d9418067878ba5aba61dc7f1c98abdb91edf0afef7892feeb0ec14b988e7/shim.sock" debug=false pid=5541
* Dec 24 09:39:34 minikube dockerd[2452]: time="2019-12-24T09:39:34.697260898Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e949fb1cc34138fed78b8ad140a5e457de27f1d886b0aa396e3583011ba31a5c/shim.sock" debug=false pid=5594
* Dec 24 09:39:35 minikube dockerd[2452]: time="2019-12-24T09:39:35.356359558Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0f51ade89b90cc6d135b4e0cf2296674ab21d4d06449ac2fe80f698cb7b5fe4/shim.sock" debug=false pid=5676
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* d0f51ade89b90 70f311871ae12 About a minute ago Running coredns 0 0093d94180678
* e949fb1cc3413 70f311871ae12 About a minute ago Running coredns 0 be7f274cfa266
* 3ebaaf4947a15 4689081edb103 About a minute ago Running storage-provisioner 0 2ce228d40327a
* c1bc1a59a901d 7d54289267dc5 About a minute ago Running kube-proxy 0 b618df2897fbc
* 591a9c6d52992 303ce5db0e90d 2 minutes ago Running etcd 0 c21fc94ae6941
* 54ea10bb8e6e5 78c190f736b11 2 minutes ago Running kube-scheduler 0 d8231ba7bd408
* 8c0f645eb0d61 5eb3b74868724 2 minutes ago Running kube-controller-manager 0 366baececbaf4
* 3ca1d13f7c1fb 0cae8d5cc64c7 2 minutes ago Running kube-apiserver 0 a57c1285c9633
* dac36cef723e8 bd12a212f9dcb 2 minutes ago Running kube-addon-manager 0 c5e3d7531cde3
*
* ==> coredns ["d0f51ade89b9"] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.5
* linux/amd64, go1.13.4, c2fd1b2
*
* ==> coredns ["e949fb1cc341"] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.5
* linux/amd64, go1.13.4, c2fd1b2
*
* ==> dmesg <==
* [ +5.002219] hpet1: lost 318 rtc interrupts
* [ +5.005757] hpet_rtc_timer_reinit: 69 callbacks suppressed
* [ +0.000003] hpet1: lost 319 rtc interrupts
* [Dec24 09:37] hpet1: lost 320 rtc interrupts
* [ +5.063353] hpet1: lost 322 rtc interrupts
* [ +5.002690] hpet1: lost 318 rtc interrupts
* [ +5.037178] hpet1: lost 320 rtc interrupts
* [ +5.025791] hpet1: lost 320 rtc interrupts
* [ +5.008885] hpet1: lost 319 rtc interrupts
* [ +4.998233] hpet1: lost 317 rtc interrupts
* [ +5.000727] hpet1: lost 318 rtc interrupts
* [ +5.001299] hpet1: lost 319 rtc interrupts
* [ +5.003865] hpet1: lost 318 rtc interrupts
* [ +5.000513] hpet1: lost 318 rtc interrupts
* [ +5.012602] hpet1: lost 319 rtc interrupts
* [Dec24 09:38] hpet1: lost 317 rtc interrupts
* [ +5.001923] hpet1: lost 318 rtc interrupts
* [ +5.001852] hpet1: lost 318 rtc interrupts
* [ +5.001920] hpet1: lost 319 rtc interrupts
* [ +5.002346] hpet1: lost 318 rtc interrupts
* [ +5.003005] hpet1: lost 318 rtc interrupts
* [ +2.695038] systemd-fstab-generator[3148]: Ignoring "noauto" for root device
* [ +2.307226] hpet1: lost 318 rtc interrupts
* [ +5.001427] hpet1: lost 318 rtc interrupts
* [ +5.002620] hpet1: lost 318 rtc interrupts
* [ +0.544113] systemd-fstab-generator[3491]: Ignoring "noauto" for root device
* [ +1.476049] NFSD: Unable to end grace period: -110
* [ +2.980445] hpet1: lost 318 rtc interrupts
* [ +10.004560] hpet_rtc_timer_reinit: 30 callbacks suppressed
* [ +0.000003] hpet1: lost 318 rtc interrupts
* [Dec24 09:39] hpet1: lost 318 rtc interrupts
* [ +5.000982] hpet1: lost 318 rtc interrupts
* [ +5.002651] hpet1: lost 318 rtc interrupts
* [ +5.001859] hpet1: lost 318 rtc interrupts
* [ +0.518234] systemd-fstab-generator[4862]: Ignoring "noauto" for root device
* [ +4.484543] hpet1: lost 318 rtc interrupts
* [ +5.005545] hpet1: lost 319 rtc interrupts
* [ +5.003615] hpet_rtc_timer_reinit: 39 callbacks suppressed
* [ +0.000002] hpet1: lost 318 rtc interrupts
* [ +5.001612] hpet1: lost 318 rtc interrupts
* [ +5.001181] hpet1: lost 318 rtc interrupts
* [ +5.003303] hpet_rtc_timer_reinit: 3 callbacks suppressed
* [ +0.000002] hpet1: lost 318 rtc interrupts
* [ +5.004352] hpet1: lost 318 rtc interrupts
* [ +4.999523] hpet1: lost 318 rtc interrupts
* [Dec24 09:40] hpet1: lost 319 rtc interrupts
* [ +5.001571] hpet1: lost 318 rtc interrupts
* [ +5.003389] hpet1: lost 318 rtc interrupts
* [ +5.000379] hpet1: lost 318 rtc interrupts
* [ +5.001033] hpet1: lost 318 rtc interrupts
* [ +5.003083] hpet1: lost 318 rtc interrupts
* [ +5.005740] hpet1: lost 318 rtc interrupts
* [ +5.002062] hpet1: lost 319 rtc interrupts
* [ +5.002699] hpet1: lost 318 rtc interrupts
* [ +5.001832] hpet1: lost 318 rtc interrupts
* [ +5.002335] hpet1: lost 318 rtc interrupts
* [ +5.002440] hpet1: lost 318 rtc interrupts
* [Dec24 09:41] hpet1: lost 319 rtc interrupts
* [ +5.003102] hpet1: lost 318 rtc interrupts
* [ +5.001734] hpet1: lost 318 rtc interrupts
*
* ==> kernel <==
* 09:41:14 up 5 min, 0 users, load average: 0.88, 1.31, 0.67
* Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.7"
*
* ==> kube-addon-manager ["dac36cef723e"] <==
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* error: no objects passed to apply
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:34+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:40:36+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:39+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:40:41+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:43+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:40:47+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:49+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:40:51+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:54+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:40:56+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:40:58+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:41:02+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:41:04+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:41:06+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:41:08+00:00 ==
* INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-12-24T09:41:12+00:00 ==
* INFO: == Reconciling with deprecated label ==
* INFO: == Reconciling with addon-manager label ==
* serviceaccount/storage-provisioner unchanged
* INFO: == Kubernetes addon reconcile completed at 2019-12-24T09:41:14+00:00 ==
*
* ==> kube-apiserver ["3ca1d13f7c1f"] <==
* I1224 09:39:14.235122 1 secure_serving.go:178] Serving securely on [::]:8443
* I1224 09:39:14.235240 1 tlsconfig.go:219] Starting DynamicServingCertificateController
* I1224 09:39:14.236729 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I1224 09:39:14.237024 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
* I1224 09:39:14.238937 1 autoregister_controller.go:140] Starting autoregister controller
* I1224 09:39:14.239123 1 cache.go:32] Waiting for caches to sync for autoregister controller
* I1224 09:39:14.239276 1 crd_finalizer.go:263] Starting CRDFinalizer
* I1224 09:39:14.240321 1 available_controller.go:386] Starting AvailableConditionController
* I1224 09:39:14.240545 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I1224 09:39:14.240830 1 controller.go:81] Starting OpenAPI AggregationController
* I1224 09:39:14.241051 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I1224 09:39:14.241184 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I1224 09:39:14.243993 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I1224 09:39:14.244042 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I1224 09:39:14.250922 1 log.go:172] http: TLS handshake error from 127.0.0.1:35798: EOF
* I1224 09:39:14.259101 1 log.go:172] http: TLS handshake error from 127.0.0.1:35800: EOF
* I1224 09:39:14.266097 1 log.go:172] http: TLS handshake error from 127.0.0.1:35802: EOF
* I1224 09:39:14.273303 1 log.go:172] http: TLS handshake error from 127.0.0.1:35804: EOF
* I1224 09:39:14.274635 1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I1224 09:39:14.274955 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
* I1224 09:39:14.275210 1 controller.go:85] Starting OpenAPI controller
* I1224 09:39:14.275388 1 customresource_discovery_controller.go:208] Starting DiscoveryController
* I1224 09:39:14.275641 1 naming_controller.go:288] Starting NamingConditionController
* I1224 09:39:14.276077 1 establishing_controller.go:73] Starting EstablishingController
* I1224 09:39:14.276236 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
* I1224 09:39:14.276357 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I1224 09:39:14.280902 1 log.go:172] http: TLS handshake error from 127.0.0.1:35806: EOF
* I1224 09:39:14.288198 1 log.go:172] http: TLS handshake error from 127.0.0.1:35808: EOF
* E1224 09:39:14.297160 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.112, ResourceVersion: 0, AdditionalErrorMsg:
* I1224 09:39:14.297719 1 log.go:172] http: TLS handshake error from 127.0.0.1:35810: EOF
* I1224 09:39:14.307260 1 log.go:172] http: TLS handshake error from 127.0.0.1:35812: EOF
* I1224 09:39:14.316266 1 log.go:172] http: TLS handshake error from 127.0.0.1:35814: EOF
* I1224 09:39:14.323756 1 log.go:172] http: TLS handshake error from 127.0.0.1:35818: EOF
* I1224 09:39:14.332661 1 log.go:172] http: TLS handshake error from 127.0.0.1:35820: EOF
* I1224 09:39:14.340385 1 log.go:172] http: TLS handshake error from 127.0.0.1:35830: EOF
* I1224 09:39:14.347318 1 log.go:172] http: TLS handshake error from 127.0.0.1:35832: EOF
* I1224 09:39:14.354173 1 log.go:172] http: TLS handshake error from 127.0.0.1:35834: EOF
* I1224 09:39:14.361145 1 log.go:172] http: TLS handshake error from 127.0.0.1:35838: EOF
* I1224 09:39:14.367469 1 log.go:172] http: TLS handshake error from 127.0.0.1:35972: EOF
* I1224 09:39:14.602958 1 shared_informer.go:204] Caches are synced for crd-autoregister
* I1224 09:39:14.649050 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I1224 09:39:14.652267 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
* I1224 09:39:14.652446 1 cache.go:39] Caches are synced for autoregister controller
* I1224 09:39:14.653087 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I1224 09:39:15.234028 1 controller.go:107] OpenAPI AggregationController: Processing item
* I1224 09:39:15.234098 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I1224 09:39:15.234119 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I1224 09:39:15.257818 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
* I1224 09:39:15.281618 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
* I1224 09:39:15.281680 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
* I1224 09:39:16.359952 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I1224 09:39:16.497191 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W1224 09:39:16.781555 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.112]
* I1224 09:39:16.783267 1 controller.go:606] quota admission added evaluator for: endpoints
* I1224 09:39:16.808082 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I1224 09:39:19.086153 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I1224 09:39:19.196408 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I1224 09:39:19.481732 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I1224 09:39:26.035212 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* I1224 09:39:26.370986 1 controller.go:606] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager ["8c0f645eb0d6"] <==
* I1224 09:39:24.850847 1 controllermanager.go:533] Started "deployment"
* I1224 09:39:24.852144 1 deployment_controller.go:152] Starting deployment controller
* I1224 09:39:24.852219 1 shared_informer.go:197] Waiting for caches to sync for deployment
* I1224 09:39:25.148138 1 controllermanager.go:533] Started "namespace"
* W1224 09:39:25.148218 1 controllermanager.go:525] Skipping "ttl-after-finished"
* W1224 09:39:25.148234 1 controllermanager.go:525] Skipping "endpointslice"
* I1224 09:39:25.149114 1 namespace_controller.go:200] Starting namespace controller
* I1224 09:39:25.149182 1 shared_informer.go:197] Waiting for caches to sync for namespace
* I1224 09:39:25.349729 1 controllermanager.go:533] Started "job"
* I1224 09:39:25.350498 1 job_controller.go:143] Starting job controller
* I1224 09:39:25.350551 1 shared_informer.go:197] Waiting for caches to sync for job
* I1224 09:39:25.599454 1 controllermanager.go:533] Started "bootstrapsigner"
* I1224 09:39:25.600176 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
* I1224 09:39:25.747838 1 node_lifecycle_controller.go:77] Sending events to api server
* E1224 09:39:25.747953 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
* W1224 09:39:25.747975 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
* W1224 09:39:25.748003 1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
* I1224 09:39:25.787232 1 shared_informer.go:197] Waiting for caches to sync for resource quota
* I1224 09:39:25.787916 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
* I1224 09:39:25.835043 1 shared_informer.go:204] Caches are synced for bootstrap_signer
* I1224 09:39:25.976263 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
* W1224 09:39:25.978943 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I1224 09:39:25.983880 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
* I1224 09:39:25.987336 1 shared_informer.go:204] Caches are synced for TTL
* I1224 09:39:25.993941 1 shared_informer.go:204] Caches are synced for HPA
* I1224 09:39:25.994216 1 shared_informer.go:204] Caches are synced for ReplicaSet
* I1224 09:39:25.994875 1 shared_informer.go:204] Caches are synced for GC
* I1224 09:39:25.995021 1 shared_informer.go:204] Caches are synced for job
* I1224 09:39:25.995309 1 shared_informer.go:204] Caches are synced for daemon sets
* I1224 09:39:25.995650 1 shared_informer.go:204] Caches are synced for stateful set
* I1224 09:39:26.005585 1 shared_informer.go:204] Caches are synced for service account
* I1224 09:39:26.010817 1 shared_informer.go:204] Caches are synced for expand
* I1224 09:39:26.014074 1 shared_informer.go:204] Caches are synced for PVC protection
* I1224 09:39:26.033196 1 shared_informer.go:204] Caches are synced for ReplicationController
* I1224 09:39:26.049531 1 shared_informer.go:204] Caches are synced for namespace
* I1224 09:39:26.050727 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
* I1224 09:39:26.089310 1 shared_informer.go:204] Caches are synced for PV protection
* I1224 09:39:26.089391 1 shared_informer.go:204] Caches are synced for persistent volume
* I1224 09:39:26.121672 1 shared_informer.go:204] Caches are synced for endpoint
* I1224 09:39:26.126128 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"b702cf34-c43f-4593-a602-3c0591994485", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-8pc2t
* I1224 09:39:26.160344 1 shared_informer.go:204] Caches are synced for taint
* I1224 09:39:26.160526 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
* W1224 09:39:26.160618 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I1224 09:39:26.160671 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I1224 09:39:26.161363 1 taint_manager.go:186] Starting NoExecuteTaintManager
* I1224 09:39:26.168714 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"dc7cf109-daaa-4842-b44b-6f5cc8d915e4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I1224 09:39:26.266594 1 shared_informer.go:204] Caches are synced for disruption
* I1224 09:39:26.266653 1 disruption.go:338] Sending events to api server.
* I1224 09:39:26.288499 1 shared_informer.go:204] Caches are synced for resource quota
* I1224 09:39:26.310158 1 shared_informer.go:204] Caches are synced for resource quota
* E1224 09:39:26.326864 1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"b702cf34-c43f-4593-a602-3c0591994485", ResourceVersion:"192", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712777159, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00188ce40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00186d8c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00188ce60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00188ce80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00188cec0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001794fa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00187ea28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00179ad80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000f8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00187ea68)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I1224 09:39:26.355695 1 shared_informer.go:204] Caches are synced for attach detach
* I1224 09:39:26.365937 1 shared_informer.go:204] Caches are synced for garbage collector
* I1224 09:39:26.366089 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I1224 09:39:26.366225 1 shared_informer.go:204] Caches are synced for deployment
* I1224 09:39:26.385033 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ea378a2a-41a3-4e22-93ab-c98f5849b944", APIVersion:"apps/v1", ResourceVersion:"182", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
* I1224 09:39:26.388815 1 shared_informer.go:204] Caches are synced for garbage collector
* I1224 09:39:26.505567 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"f71999ae-e478-4186-ac15-5853ee2d8b1b", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-knn29
* I1224 09:39:26.552762 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"f71999ae-e478-4186-ac15-5853ee2d8b1b", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-55c2n
* I1224 09:39:31.161105 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-proxy ["c1bc1a59a901"] <==
* W1224 09:39:29.258093 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
* I1224 09:39:29.293059 1 node.go:135] Successfully retrieved node IP: 192.168.99.112
* I1224 09:39:29.293458 1 server_others.go:145] Using iptables Proxier.
* W1224 09:39:29.293973 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
* I1224 09:39:29.297017 1 server.go:571] Version: v1.17.0
* I1224 09:39:29.298218 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1224 09:39:29.298856 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I1224 09:39:29.300365 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I1224 09:39:29.305349 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1224 09:39:29.308567 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1224 09:39:29.310297 1 config.go:131] Starting endpoints config controller
* I1224 09:39:29.310337 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
* I1224 09:39:29.310423 1 config.go:313] Starting service config controller
* I1224 09:39:29.310432 1 shared_informer.go:197] Waiting for caches to sync for service config
* I1224 09:39:29.425960 1 shared_informer.go:204] Caches are synced for endpoints config
* I1224 09:39:29.425993 1 shared_informer.go:204] Caches are synced for service config
*
* ==> kube-scheduler ["54ea10bb8e6e"] <==
* I1224 09:38:58.330330 1 serving.go:312] Generated self-signed cert in-memory
* W1224 09:38:59.110651 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
* W1224 09:38:59.110779 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
* W1224 09:39:09.112752 1 authentication.go:296] Error looking up in-cluster authentication configuration: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: net/http: TLS handshake timeout
* W1224 09:39:09.113241 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1224 09:39:09.113698 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* W1224 09:39:14.548592 1 authorization.go:47] Authorization is disabled
* W1224 09:39:14.548618 1 authentication.go:92] Authentication is disabled
* I1224 09:39:14.548651 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I1224 09:39:14.583166 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1224 09:39:14.583235 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1224 09:39:14.591001 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I1224 09:39:14.591105 1 tlsconfig.go:219] Starting DynamicServingCertificateController
* E1224 09:39:14.663549 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1224 09:39:14.664193 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1224 09:39:14.665873 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1224 09:39:14.666546 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1224 09:39:14.670452 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1224 09:39:14.673546 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1224 09:39:14.674312 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1224 09:39:14.676997 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1224 09:39:14.677600 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1224 09:39:14.679800 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1224 09:39:14.680399 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1224 09:39:14.681542 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1224 09:39:15.668650 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1224 09:39:15.688001 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1224 09:39:15.690062 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1224 09:39:15.696282 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1224 09:39:15.698877 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1224 09:39:15.703033 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1224 09:39:15.706545 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1224 09:39:15.707350 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1224 09:39:15.707668 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1224 09:39:15.708401 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1224 09:39:15.710679 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1224 09:39:15.713557 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* I1224 09:39:16.783736 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1224 09:39:16.792968 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
* I1224 09:39:16.812246 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* E1224 09:39:29.194023 1 factory.go:494] pod is already present in the activeQ
*
* ==> kubelet <==
* -- Logs begin at Tue 2019-12-24 09:36:37 UTC, end at Tue 2019-12-24 09:41:14 UTC. --
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.619137 4872 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.619774 4872 clientconn.go:577] ClientConn switching balancer to "pick_first"
* Dec 24 09:39:19 minikube kubelet[4872]: E1224 09:39:19.621515 4872 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
* Dec 24 09:39:19 minikube kubelet[4872]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.670291 4872 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.5, apiVersion: 1.40.0
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.730742 4872 server.go:1113] Started kubelet
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.733852 4872 server.go:143] Starting to listen on 0.0.0.0:10250
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.735845 4872 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.738882 4872 server.go:354] Adding debug handlers to kubelet server.
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.743970 4872 volume_manager.go:265] Starting Kubelet Volume Manager
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.764727 4872 desired_state_of_world_populator.go:138] Desired state populator starts to run
* Dec 24 09:39:19 minikube kubelet[4872]: I1224 09:39:19.861948 4872 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.051071 4872 status_manager.go:157] Starting to sync pod status with apiserver
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.062209 4872 kubelet.go:1820] Starting kubelet main sync loop.
* Dec 24 09:39:20 minikube kubelet[4872]: E1224 09:39:20.064413 4872 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.108553 4872 kubelet_node_status.go:70] Attempting to register node minikube
* Dec 24 09:39:20 minikube kubelet[4872]: E1224 09:39:20.194877 4872 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.195605 4872 kubelet_node_status.go:112] Node minikube was previously registered
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.196024 4872 kubelet_node_status.go:73] Successfully registered node minikube
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.385457 4872 setters.go:535] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-12-24 09:39:20.385422547 +0000 UTC m=+1.706108436 LastTransitionTime:2019-12-24 09:39:20.385422547 +0000 UTC m=+1.706108436 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
* Dec 24 09:39:20 minikube kubelet[4872]: E1224 09:39:20.399727 4872 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.752163 4872 cpu_manager.go:173] [cpumanager] starting with none policy
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.752495 4872 cpu_manager.go:174] [cpumanager] reconciling every 10s
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.752725 4872 policy_none.go:43] [cpumanager] none policy: Start
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.757739 4872 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.915857 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/23feff84cd825d99cde8955066ade45b-ca-certs") pod "kube-apiserver-minikube" (UID: "23feff84cd825d99cde8955066ade45b")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.916337 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/23feff84cd825d99cde8955066ade45b-k8s-certs") pod "kube-apiserver-minikube" (UID: "23feff84cd825d99cde8955066ade45b")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.916874 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917069 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-k8s-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917249 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-addons") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917693 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-kubeconfig") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917850 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/69d383f2209bab2c02c146a411ebf2e8-etcd-certs") pod "etcd-minikube" (UID: "69d383f2209bab2c02c146a411ebf2e8")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917925 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/69d383f2209bab2c02c146a411ebf2e8-etcd-data") pod "etcd-minikube" (UID: "69d383f2209bab2c02c146a411ebf2e8")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.917982 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-kubeconfig") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.918033 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff67867321338ffd885039e188f6b424-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff67867321338ffd885039e188f6b424")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.918088 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/23feff84cd825d99cde8955066ade45b-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "23feff84cd825d99cde8955066ade45b")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.918142 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-ca-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.918197 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
* Dec 24 09:39:20 minikube kubelet[4872]: I1224 09:39:20.918225 4872 reconciler.go:156] Reconciler: start to sync state
* Dec 24 09:39:26 minikube kubelet[4872]: E1224 09:39:26.164799 4872 reflector.go:156] object-"kube-system"/"kube-proxy-token-2gfrh": Failed to list *v1.Secret: secrets "kube-proxy-token-2gfrh" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Dec 24 09:39:26 minikube kubelet[4872]: E1224 09:39:26.167080 4872 reflector.go:156] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Dec 24 09:39:26 minikube kubelet[4872]: I1224 09:39:26.284358 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/3849e82d-03c8-4e7e-8831-ba00a60b8d7f-lib-modules") pod "kube-proxy-8pc2t" (UID: "3849e82d-03c8-4e7e-8831-ba00a60b8d7f")
* Dec 24 09:39:26 minikube kubelet[4872]: I1224 09:39:26.284663 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/3849e82d-03c8-4e7e-8831-ba00a60b8d7f-xtables-lock") pod "kube-proxy-8pc2t" (UID: "3849e82d-03c8-4e7e-8831-ba00a60b8d7f")
* Dec 24 09:39:26 minikube kubelet[4872]: I1224 09:39:26.284870 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3849e82d-03c8-4e7e-8831-ba00a60b8d7f-kube-proxy") pod "kube-proxy-8pc2t" (UID: "3849e82d-03c8-4e7e-8831-ba00a60b8d7f")
* Dec 24 09:39:26 minikube kubelet[4872]: I1224 09:39:26.285055 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-2gfrh" (UniqueName: "kubernetes.io/secret/3849e82d-03c8-4e7e-8831-ba00a60b8d7f-kube-proxy-token-2gfrh") pod "kube-proxy-8pc2t" (UID: "3849e82d-03c8-4e7e-8831-ba00a60b8d7f")
* Dec 24 09:39:28 minikube kubelet[4872]: W1224 09:39:28.048846 4872 pod_container_deletor.go:75] Container "b618df2897fbc561121c21f3c0ddecbad1e7f0c3c020655bb10a5e04179675a7" not found in pod's containers
* Dec 24 09:39:30 minikube kubelet[4872]: I1224 09:39:30.736825 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vnxb2" (UniqueName: "kubernetes.io/secret/f4856b93-5456-4980-8fa0-0432b3bcea55-storage-provisioner-token-vnxb2") pod "storage-provisioner" (UID: "f4856b93-5456-4980-8fa0-0432b3bcea55")
* Dec 24 09:39:30 minikube kubelet[4872]: I1224 09:39:30.736992 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f4856b93-5456-4980-8fa0-0432b3bcea55-tmp") pod "storage-provisioner" (UID: "f4856b93-5456-4980-8fa0-0432b3bcea55")
* Dec 24 09:39:31 minikube kubelet[4872]: W1224 09:39:31.305874 4872 pod_container_deletor.go:75] Container "2ce228d40327a32b859bedd420dbeff461bfcf4d6bfac46bcfae0fb3a62f5802" not found in pod's containers
* Dec 24 09:39:32 minikube kubelet[4872]: I1224 09:39:32.685021 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-t5qpg" (UniqueName: "kubernetes.io/secret/3742d5de-1a61-4072-8fe2-6a460f9b3afa-coredns-token-t5qpg") pod "coredns-6955765f44-knn29" (UID: "3742d5de-1a61-4072-8fe2-6a460f9b3afa")
* Dec 24 09:39:32 minikube kubelet[4872]: I1224 09:39:32.688443 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3742d5de-1a61-4072-8fe2-6a460f9b3afa-config-volume") pod "coredns-6955765f44-knn29" (UID: "3742d5de-1a61-4072-8fe2-6a460f9b3afa")
* Dec 24 09:39:33 minikube kubelet[4872]: I1224 09:39:33.731492 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8e07439c-58f2-4b6d-a58f-85cb2fff9b57-config-volume") pod "coredns-6955765f44-55c2n" (UID: "8e07439c-58f2-4b6d-a58f-85cb2fff9b57")
* Dec 24 09:39:33 minikube kubelet[4872]: I1224 09:39:33.731782 4872 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-t5qpg" (UniqueName: "kubernetes.io/secret/8e07439c-58f2-4b6d-a58f-85cb2fff9b57-coredns-token-t5qpg") pod "coredns-6955765f44-55c2n" (UID: "8e07439c-58f2-4b6d-a58f-85cb2fff9b57")
* Dec 24 09:39:34 minikube kubelet[4872]: W1224 09:39:34.566482 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-knn29 through plugin: invalid network status for
* Dec 24 09:39:34 minikube kubelet[4872]: W1224 09:39:34.568821 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-knn29 through plugin: invalid network status for
* Dec 24 09:39:34 minikube kubelet[4872]: W1224 09:39:34.569625 4872 pod_container_deletor.go:75] Container "be7f274cfa2669b013b9e368c5a8142d640614e4cf4380101f8ef19d82fd60c5" not found in pod's containers
* Dec 24 09:39:35 minikube kubelet[4872]: W1224 09:39:35.236931 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-55c2n through plugin: invalid network status for
* Dec 24 09:39:35 minikube kubelet[4872]: W1224 09:39:35.602197 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-knn29 through plugin: invalid network status for
* Dec 24 09:39:35 minikube kubelet[4872]: W1224 09:39:35.652178 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-55c2n through plugin: invalid network status for
* Dec 24 09:39:36 minikube kubelet[4872]: W1224 09:39:36.761640 4872 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-55c2n through plugin: invalid network status for
*
* ==> storage-provisioner ["3ebaaf4947a1"] <==
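Note on the E1224 09:39:26 reflector errors above: the node authorizer rejects the kubelet's attempt to list the kube-proxy secret and configmap for user "system:node:minikube" ("no relationship found between node ... and this object"). This is typically transient right after node registration, before the kube-proxy pod's binding to the node is visible to the authorizer. Below is a minimal sketch of how the same authorization decision could be replayed against the API server from an admin kubeconfig, using client-go's SubjectAccessReview API. It is not part of minikube or of this log; the file name, the default kubeconfig path, and the client-go v0.18+ Create signature are assumptions.

// sar_check.go (hypothetical helper): asks the API server whether
// "system:node:minikube" may list the kube-proxy token secret in kube-system,
// mirroring the check that the kubelet reflector reported as forbidden above.
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig at the default location (~/.kube/config),
	// which is what `minikube start` normally writes.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Describe the exact access the kubelet was denied: listing the
	// kube-proxy-token-2gfrh secret in kube-system as the node identity.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User:   "system:node:minikube",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "secrets",
				Name:      "kube-proxy-token-2gfrh",
			},
		},
	}

	// Create signature with context and options assumes client-go v0.18 or newer.
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v denied=%v reason=%q\n",
		resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
}

Once the kube-proxy pod is bound to the node (as the VerifyControllerAttachedVolume lines for kube-proxy-8pc2t show a few seconds later), the node authorizer establishes the node-to-secret relationship and the same review should report allowed=true.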