@lukasheinrich · Created May 26, 2019 18:14
2019-05-26T18:13:05.551151501Z stderr F W0526 18:13:05.540783 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-05-26T18:13:05.587406293Z stderr F I0526 18:13:05.585832 1 server_others.go:146] Using iptables Proxier.
2019-05-26T18:13:05.587518904Z stderr F I0526 18:13:05.586173 1 server.go:562] Version: v1.14.2
2019-05-26T18:13:05.623931244Z stderr F I0526 18:13:05.623182 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-05-26T18:13:05.634930768Z stderr F I0526 18:13:05.623275 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-05-26T18:13:05.634957583Z stderr F I0526 18:13:05.623325 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-05-26T18:13:05.635079138Z stderr F I0526 18:13:05.626696 1 config.go:202] Starting service config controller
2019-05-26T18:13:05.635087681Z stderr F I0526 18:13:05.626726 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-05-26T18:13:05.638948754Z stderr F I0526 18:13:05.626770 1 config.go:102] Starting endpoints config controller
2019-05-26T18:13:05.638980117Z stderr F I0526 18:13:05.635189 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-05-26T18:13:05.727150075Z stderr F I0526 18:13:05.726968 1 controller_utils.go:1034] Caches are synced for service config controller
2019-05-26T18:13:05.735922035Z stderr F I0526 18:13:05.735765 1 controller_utils.go:1034] Caches are synced for endpoints config controller
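The kube-proxy log above shows it falling back to the iptables proxier (the empty proxy-mode flag is harmless) and syncing its config caches. A hedged way to look at the rules it then programs, assuming the default kind node container name:

  # list the KUBE-* chains and rules kube-proxy maintains on the worker node
  docker exec kind-worker iptables-save | grep -E '^:KUBE|-A KUBE' | head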
2019-05-26T18:13:34.412623745Z stdout F hostIP = 172.17.0.2
2019-05-26T18:13:34.41268209Z stdout F podIP = 172.17.0.2
2019-05-26T18:13:34.420492739Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:34.420526372Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:34.503835485Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-05-26T18:13:34.503892355Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:34.503899772Z stdout F handling current node
2019-05-26T18:13:34.507852498Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:34.507899684Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:34.507907027Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-05-26T18:13:44.511715214Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:44.511754403Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:44.511761183Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:44.511764648Z stdout F handling current node
2019-05-26T18:13:44.511770428Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:44.511773338Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:54.603658582Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:54.603743638Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:54.603797193Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:54.603804431Z stdout F handling current node
2019-05-26T18:13:54.603810842Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:54.603815714Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
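The kindnet log above shows the node learning one route per peer node: each pod CIDR is routed via that node's Docker bridge IP. A hedged check of the installed routes, assuming the default kind node names:

  # routes kindnet is expected to have programmed on kind-worker, per the log above
  docker exec kind-worker ip route show
  # roughly: 10.244.0.0/24 via 172.17.0.3 (kind-control-plane)
  #          10.244.2.0/24 via 172.17.0.4 (kind-worker2)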
-- Logs begin at Sun 2019-05-26 18:11:56 UTC, end at Sun 2019-05-26 18:14:02 UTC. --
May 26 18:11:56 kind-worker systemd[1]: Starting containerd container runtime...
May 26 18:11:56 kind-worker systemd[1]: Started containerd container runtime.
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.989263789Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.990007529Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994376939Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994465570Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994597818Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994770025Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994905552Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995071955Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995162444Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995226090Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995311531Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995381126Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995439857Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995496737Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995562834Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995690534Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995784781Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996368885Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996487000Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996615830Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996680007Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996737660Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996799361Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996870290Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996931848Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996986327Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997039743Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997111324Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997277153Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997340470Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997396051Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997456311Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:56.997697766Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003051760Z" level=info msg="Connect containerd service"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003281877Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003546562Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003917214Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.004235370Z" level=info msg=serving... address="/run/containerd/containerd.sock"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.004328853Z" level=info msg="containerd successfully booted in 0.016052s"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.043208662Z" level=info msg="Start subscribing containerd event"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.047561519Z" level=info msg="Start recovering state"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.059672311Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.060879991Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.061861018Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.062758415Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.074981246Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.075930982Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.076849709Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.082478316Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.083226186Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.083952001Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.084690924Z" level=warning msg="The image sha256:1c93cc1335f8df0a96db1a773bb2851920fb574e1c9386f3960674279d5b978b is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.085352505Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.086057992Z" level=warning msg="The image sha256:58f6abb9fb1b336348d3bb9dd80b5ecbc8dc963a3c1c20e778a0c20d3ed25344 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.086667202Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.091904159Z" level=warning msg="The image sha256:e455634c173b0060e537f229155cbb1649d96945d8de54f3321eebd092d66a0c is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.092652169Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093336190Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093857166Z" level=info msg="Start event monitor"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093978132Z" level=info msg="Start snapshots syncer"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.094248167Z" level=info msg="Start streaming server"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.094194985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.095205652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/ip-masq-agent:v2.4.1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.095818016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7387c4b88e2df50ccca4f6f8167992605cfe50d0075a647b5ab5187378ac2bd8,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.096588200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-proxy:v1.14.2,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:12:31 kind-worker containerd[46]: time="2019-05-26T18:12:31.786335741Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:48 kind-worker containerd[46]: time="2019-05-26T18:12:48.262263971Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:53 kind-worker containerd[46]: time="2019-05-26T18:12:53.433725012Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:58 kind-worker containerd[46]: time="2019-05-26T18:12:58.434854145Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.111586003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.112168294Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.485210404Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-btr2x,Uid:e5be4844-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.513026252Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-bq6xs,Uid:e5d062ca-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.527235700Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-ntf8l,Uid:e5bf0d1f-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.579774350Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d/shim.sock" debug=false pid=193
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.604123032Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d/shim.sock" debug=false pid=212
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.649363377Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53/shim.sock" debug=false pid=230
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.860006893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btr2x,Uid:e5be4844-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d""
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.873146635Z" level=info msg="CreateContainer within sandbox "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.932239610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-bq6xs,Uid:e5d062ca-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d""
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.941955877Z" level=info msg="CreateContainer within sandbox "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
May 26 18:13:02 kind-worker containerd[46]: time="2019-05-26T18:13:02.102819413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ntf8l,Uid:e5bf0d1f-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53""
May 26 18:13:02 kind-worker containerd[46]: time="2019-05-26T18:13:02.153346509Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.236498653Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.284285412Z" level=info msg="StartContainer for "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.287049577Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2/shim.sock" debug=false pid=359
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.467012087Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.831779196Z" level=info msg="StartContainer for "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2" returns successfully"
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.520658348Z" level=info msg="CreateContainer within sandbox "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3""
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.523057760Z" level=info msg="StartContainer for "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3""
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.526601864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3/shim.sock" debug=false pid=412
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.971394474Z" level=info msg="StartContainer for "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3" returns successfully"
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.103711799Z" level=info msg="CreateContainer within sandbox "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb""
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.104530513Z" level=info msg="StartContainer for "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb""
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.105690740Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb/shim.sock" debug=false pid=467
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.451105059Z" level=info msg="StartContainer for "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb" returns successfully"
May 26 18:13:08 kind-worker containerd[46]: time="2019-05-26T18:13:08.595617544Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:13 kind-worker containerd[46]: time="2019-05-26T18:13:13.597156379Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:18 kind-worker containerd[46]: time="2019-05-26T18:13:18.598350917Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:23 kind-worker containerd[46]: time="2019-05-26T18:13:23.599748753Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:28 kind-worker containerd[46]: time="2019-05-26T18:13:28.601260025Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.602895744Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.823361052Z" level=info msg="Finish piping stderr of container "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.823433789Z" level=info msg="Finish piping stdout of container "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.859498828Z" level=info msg="TaskExit event &TaskExit{ContainerID:dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2,ID:dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2,Pid:378,ExitStatus:2,ExitedAt:2019-05-26 18:13:33.823895051 +0000 UTC,}"
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.905400278Z" level=info msg="shim reaped" id=dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.060682049Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.115186291Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38""
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.116148002Z" level=info msg="StartContainer for "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38""
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.117074579Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38/shim.sock" debug=false pid=634
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.397934300Z" level=info msg="StartContainer for "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38" returns successfully"
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.102626775Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-z62fd,Uid:e8bf84f9-7fe1-11e9-83f7-0242ac110003,Namespace:default,Attempt:0,}"
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.169168605Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61/shim.sock" debug=false pid=756
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.321808570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-z62fd,Uid:e8bf84f9-7fe1-11e9-83f7-0242ac110003,Namespace:default,Attempt:0,} returns sandbox id "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61""
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.324694756Z" level=info msg="PullImage "alpine:latest""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.472373222Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine:latest,Labels:map[string]string{},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.480542026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.481257780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.631930032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.633416846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.639428372Z" level=info msg="PullImage "alpine:latest" returns image reference "sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.639625825Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.641864830Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for container &ContainerMetadata{Name:hello,Attempt:0,}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.699340924Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for &ContainerMetadata{Name:hello,Attempt:0,} returns container id "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.700121136Z" level=info msg="StartContainer for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.701069220Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d/shim.sock" debug=false pid=812
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.860954184Z" level=info msg="StartContainer for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" returns successfully"
May 26 18:13:52 kind-worker containerd[46]: time="2019-05-26T18:13:52.125427681Z" level=info msg="Attach for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" with tty true and stdin true"
May 26 18:13:52 kind-worker containerd[46]: time="2019-05-26T18:13:52.125539133Z" level=info msg="Attach for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" returns URL "http://127.0.0.1:45391/attach/C3KQSoVo""
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.868417524Z" level=info msg="Finish piping stdout of container "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.868472846Z" level=info msg="Attach stream "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d-attach-8b8504e51bd29f191fca68333687e4288c5eb596701a04931f99ffecd2ca661d-stdout" closed"
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.870223959Z" level=info msg="Attach stream "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d-attach-8b8504e51bd29f191fca68333687e4288c5eb596701a04931f99ffecd2ca661d-stdin" closed"
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.923707837Z" level=info msg="TaskExit event &TaskExit{ContainerID:b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d,ID:b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d,Pid:829,ExitStatus:127,ExitedAt:2019-05-26 18:14:01.867665909 +0000 UTC,}"
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.993627385Z" level=info msg="shim reaped" id=b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.110581383Z" level=info msg="PullImage "alpine:latest""
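The repeated "no network config found in /etc/cni/net.d" errors in the containerd journal above are expected until the kindnet pod drops the CNI config onto the node; the kindnet-cni container even exits once (ExitStatus:2) and is restarted as Attempt:1 before things settle. A hedged way to confirm the config is in place afterwards (exact filename is an assumption):

  # confirm the CNI config kindnet eventually writes on the node
  docker exec kind-worker ls -l /etc/cni/net.d
  docker exec kind-worker cat /etc/cni/net.d/*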
2019-05-26T18:13:28.749927924Z stdout F .:53
2019-05-26T18:13:28.749976597Z stdout F 2019-05-26T18:13:28.749Z [INFO] CoreDNS-1.3.1
2019-05-26T18:13:28.749982386Z stdout F 2019-05-26T18:13:28.749Z [INFO] linux/amd64, go1.11.4, 6b56a9c
2019-05-26T18:13:28.74998777Z stdout F CoreDNS-1.3.1
2019-05-26T18:13:28.749991301Z stdout F linux/amd64, go1.11.4, 6b56a9c
2019-05-26T18:13:28.750280593Z stdout F 2019-05-26T18:13:28.749Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-05-26T18:13:35.75193083Z stdout F 2019-05-26T18:13:35.751Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:36138->169.254.169.254:53: i/o timeout
2019-05-26T18:13:38.752318636Z stdout F 2019-05-26T18:13:38.752Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:40910->169.254.169.254:53: i/o timeout
2019-05-26T18:13:39.752476871Z stdout F 2019-05-26T18:13:39.752Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:39931->169.254.169.254:53: i/o timeout
2019-05-26T18:13:40.752435788Z stdout F 2019-05-26T18:13:40.752Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:45028->169.254.169.254:53: i/o timeout
2019-05-26T18:13:43.753038699Z stdout F 2019-05-26T18:13:43.752Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:37695->169.254.169.254:53: i/o timeout
2019-05-26T18:13:46.753737253Z stdout F 2019-05-26T18:13:46.753Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:45611->169.254.169.254:53: i/o timeout
2019-05-26T18:13:49.754252742Z stdout F 2019-05-26T18:13:49.754Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:51465->169.254.169.254:53: i/o timeout
2019-05-26T18:13:52.754670575Z stdout F 2019-05-26T18:13:52.754Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:45520->169.254.169.254:53: i/o timeout
2019-05-26T18:13:53.845235137Z stdout F 2019-05-26T18:13:53.845Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:53706->169.254.169.254:53: i/o timeout
2019-05-26T18:13:53.845562985Z stdout F 2019-05-26T18:13:53.845Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:50146->169.254.169.254:53: i/o timeout
2019-05-26T18:13:55.755219346Z stdout F 2019-05-26T18:13:55.755Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:32928->169.254.169.254:53: i/o timeout
2019-05-26T18:13:55.846429246Z stdout F 2019-05-26T18:13:55.846Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:42562->169.254.169.254:53: i/o timeout
2019-05-26T18:13:55.846649758Z stdout F 2019-05-26T18:13:55.846Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:36592->169.254.169.254:53: i/o timeout
2019-05-26T18:13:56.345957771Z stdout F 2019-05-26T18:13:56.345Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:53672->169.254.169.254:53: i/o timeout
2019-05-26T18:13:56.346004992Z stdout F 2019-05-26T18:13:56.345Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:58355->169.254.169.254:53: i/o timeout
2019-05-26T18:13:57.847276895Z stdout F 2019-05-26T18:13:57.846Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:34470->169.254.169.254:53: i/o timeout
2019-05-26T18:13:57.847583834Z stdout F 2019-05-26T18:13:57.847Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:37705->169.254.169.254:53: i/o timeout
2019-05-26T18:13:58.346675806Z stdout F 2019-05-26T18:13:58.346Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:34662->169.254.169.254:53: i/o timeout
2019-05-26T18:13:58.34701167Z stdout F 2019-05-26T18:13:58.346Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:33587->169.254.169.254:53: i/o timeout
2019-05-26T18:13:58.755599176Z stdout F 2019-05-26T18:13:58.755Z [ERROR] plugin/errors: 2 6079531268458449042.6382902342908365222. HINFO: read udp 10.244.0.3:51055->169.254.169.254:53: i/o timeout
2019-05-26T18:13:58.849261295Z stdout F 2019-05-26T18:13:58.849Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:52106->169.254.169.254:53: i/o timeout
2019-05-26T18:13:58.849603297Z stdout F 2019-05-26T18:13:58.849Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:39460->169.254.169.254:53: i/o timeout
2019-05-26T18:14:00.850476998Z stdout F 2019-05-26T18:14:00.850Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:38366->169.254.169.254:53: i/o timeout
2019-05-26T18:14:00.850627465Z stdout F 2019-05-26T18:14:00.850Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:39509->169.254.169.254:53: i/o timeout
2019-05-26T18:14:01.352135586Z stdout F 2019-05-26T18:14:01.351Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. AAAA: read udp 10.244.0.3:40963->169.254.169.254:53: i/o timeout
2019-05-26T18:14:01.352276483Z stdout F 2019-05-26T18:14:01.351Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.travis-ci-prod-2.internal. A: read udp 10.244.0.3:51120->169.254.169.254:53: i/o timeout
2019-05-26T18:13:28.127875393Z stdout F .:53
2019-05-26T18:13:28.127949764Z stdout F 2019-05-26T18:13:28.126Z [INFO] CoreDNS-1.3.1
2019-05-26T18:13:28.12799236Z stdout F 2019-05-26T18:13:28.126Z [INFO] linux/amd64, go1.11.4, 6b56a9c
2019-05-26T18:13:28.127999076Z stdout F CoreDNS-1.3.1
2019-05-26T18:13:28.128003348Z stdout F linux/amd64, go1.11.4, 6b56a9c
2019-05-26T18:13:28.128008356Z stdout F 2019-05-26T18:13:28.126Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-05-26T18:13:34.133214948Z stdout F 2019-05-26T18:13:34.133Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:46564->169.254.169.254:53: i/o timeout
2019-05-26T18:13:37.130087645Z stdout F 2019-05-26T18:13:37.129Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:57112->169.254.169.254:53: i/o timeout
2019-05-26T18:13:38.133854794Z stdout F 2019-05-26T18:13:38.133Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:36886->169.254.169.254:53: i/o timeout
2019-05-26T18:13:39.133804597Z stdout F 2019-05-26T18:13:39.133Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:58046->169.254.169.254:53: i/o timeout
2019-05-26T18:13:42.134692401Z stdout F 2019-05-26T18:13:42.134Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:52640->169.254.169.254:53: i/o timeout
2019-05-26T18:13:45.135097397Z stdout F 2019-05-26T18:13:45.134Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:43087->169.254.169.254:53: i/o timeout
2019-05-26T18:13:48.135812744Z stdout F 2019-05-26T18:13:48.135Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:36724->169.254.169.254:53: i/o timeout
2019-05-26T18:13:51.136715515Z stdout F 2019-05-26T18:13:51.136Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:35405->169.254.169.254:53: i/o timeout
2019-05-26T18:13:54.136916822Z stdout F 2019-05-26T18:13:54.136Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:42698->169.254.169.254:53: i/o timeout
2019-05-26T18:13:57.137694091Z stdout F 2019-05-26T18:13:57.137Z [ERROR] plugin/errors: 2 4291165517175029779.5191498156786763370. HINFO: read udp 10.244.0.2:55380->169.254.169.254:53: i/o timeout
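Both CoreDNS replicas above time out forwarding queries (including the dl-cdn.alpinelinux.org lookups) to 169.254.169.254, the link-local resolver the node inherited from the Travis/GCE host's /etc/resolv.conf, and that address is not reachable from the pod network. A minimal sketch of where to look, assuming the default kubeadm Corefile, which forwards non-cluster names to the node's resolvers (forward . /etc/resolv.conf):

  # the forward target CoreDNS uses
  kubectl -n kube-system get configmap coredns -o yaml | grep -A1 forward
  # the node resolv.conf, which on this host points at 169.254.169.254
  docker exec kind-worker cat /etc/resolv.conf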
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 2
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Kernel Version: 4.4.0-101-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.305GiB
Name: travis-job-38863038-4ba8-45ce-9733-77f730b948e2
ID: DH3M:23FP:35CF:LCVT:ROBH:CV5W:C5W2:JSP4:7G7W:NH4L:6FOS:WJOW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
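The block above appears to be host-side Docker information from the Travis VM (Ubuntu 14.04.5, kernel 4.4, Docker 17.09.0-ce), i.e. the output of:

  docker info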
2019-05-26T18:12:21.124447689Z stderr F 2019-05-26 18:12:21.124245 I | etcdmain: etcd Version: 3.3.10
2019-05-26T18:12:21.124568223Z stderr F 2019-05-26 18:12:21.124530 I | etcdmain: Git SHA: 27fc7e2
2019-05-26T18:12:21.124622842Z stderr F 2019-05-26 18:12:21.124593 I | etcdmain: Go Version: go1.10.4
2019-05-26T18:12:21.124682651Z stderr F 2019-05-26 18:12:21.124653 I | etcdmain: Go OS/Arch: linux/amd64
2019-05-26T18:12:21.124728142Z stderr F 2019-05-26 18:12:21.124700 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-05-26T18:12:21.1248744Z stderr F 2019-05-26 18:12:21.124833 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2019-05-26T18:12:21.126321832Z stderr F 2019-05-26 18:12:21.126224 I | embed: listening for peers on https://172.17.0.3:2380
2019-05-26T18:12:21.126457949Z stderr F 2019-05-26 18:12:21.126413 I | embed: listening for client requests on 127.0.0.1:2379
2019-05-26T18:12:21.126553145Z stderr F 2019-05-26 18:12:21.126502 I | embed: listening for client requests on 172.17.0.3:2379
2019-05-26T18:12:21.132867697Z stderr F 2019-05-26 18:12:21.132730 I | etcdserver: name = kind-control-plane
2019-05-26T18:12:21.1328999Z stderr F 2019-05-26 18:12:21.132756 I | etcdserver: data dir = /var/lib/etcd
2019-05-26T18:12:21.132905408Z stderr F 2019-05-26 18:12:21.132763 I | etcdserver: member dir = /var/lib/etcd/member
2019-05-26T18:12:21.13291975Z stderr F 2019-05-26 18:12:21.132766 I | etcdserver: heartbeat = 100ms
2019-05-26T18:12:21.13292437Z stderr F 2019-05-26 18:12:21.132769 I | etcdserver: election = 1000ms
2019-05-26T18:12:21.13292924Z stderr F 2019-05-26 18:12:21.132773 I | etcdserver: snapshot count = 10000
2019-05-26T18:12:21.132933634Z stderr F 2019-05-26 18:12:21.132784 I | etcdserver: advertise client URLs = https://172.17.0.3:2379
2019-05-26T18:12:21.132939954Z stderr F 2019-05-26 18:12:21.132789 I | etcdserver: initial advertise peer URLs = https://172.17.0.3:2380
2019-05-26T18:12:21.132944831Z stderr F 2019-05-26 18:12:21.132797 I | etcdserver: initial cluster = kind-control-plane=https://172.17.0.3:2380
2019-05-26T18:12:21.137830939Z stderr F 2019-05-26 18:12:21.137683 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
2019-05-26T18:12:21.137865251Z stderr F 2019-05-26 18:12:21.137737 I | raft: b273bc7741bcb020 became follower at term 0
2019-05-26T18:12:21.137872199Z stderr F 2019-05-26 18:12:21.137747 I | raft: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-05-26T18:12:21.137887798Z stderr F 2019-05-26 18:12:21.137752 I | raft: b273bc7741bcb020 became follower at term 1
2019-05-26T18:12:21.153899209Z stderr F 2019-05-26 18:12:21.153734 W | auth: simple token is not cryptographically signed
2019-05-26T18:12:21.15875841Z stderr F 2019-05-26 18:12:21.158628 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2019-05-26T18:12:21.162046531Z stderr F 2019-05-26 18:12:21.161958 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2019-05-26T18:12:21.162465449Z stderr F 2019-05-26 18:12:21.162405 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2019-05-26T18:12:21.163329174Z stderr F 2019-05-26 18:12:21.163260 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2019-05-26T18:12:21.838721618Z stderr F 2019-05-26 18:12:21.838183 I | raft: b273bc7741bcb020 is starting a new election at term 1
2019-05-26T18:12:21.838754276Z stderr F 2019-05-26 18:12:21.838300 I | raft: b273bc7741bcb020 became candidate at term 2
2019-05-26T18:12:21.838759906Z stderr F 2019-05-26 18:12:21.838332 I | raft: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
2019-05-26T18:12:21.838782467Z stderr F 2019-05-26 18:12:21.838412 I | raft: b273bc7741bcb020 became leader at term 2
2019-05-26T18:12:21.838811633Z stderr F 2019-05-26 18:12:21.838422 I | raft: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2019-05-26T18:12:21.838817312Z stderr F 2019-05-26 18:12:21.838574 I | etcdserver: setting up the initial cluster version to 3.3
2019-05-26T18:12:21.849839455Z stderr F 2019-05-26 18:12:21.839971 N | etcdserver/membership: set the initial cluster version to 3.3
2019-05-26T18:12:21.84988047Z stderr F 2019-05-26 18:12:21.840020 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2019-05-26T18:12:21.849896009Z stderr F 2019-05-26 18:12:21.840027 I | etcdserver/api: enabled capabilities for version 3.3
2019-05-26T18:12:21.849901838Z stderr F 2019-05-26 18:12:21.840045 I | embed: ready to serve client requests
2019-05-26T18:12:21.849906963Z stderr F 2019-05-26 18:12:21.840206 I | embed: ready to serve client requests
2019-05-26T18:12:21.849910973Z stderr F 2019-05-26 18:12:21.842402 I | embed: serving client requests on 127.0.0.1:2379
2019-05-26T18:12:21.849915299Z stderr F 2019-05-26 18:12:21.842486 I | embed: serving client requests on 172.17.0.3:2379
2019-05-26T18:12:25.912947485Z stderr F proto: no coders for int
2019-05-26T18:12:25.913032352Z stderr F proto: no encoder for ValueSize int [GetProperties]
2019-05-26T18:12:46.409080244Z stderr F 2019-05-26 18:12:46.408924 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (126.593756ms) to execute
2019-05-26T18:12:53.43817967Z stderr F 2019-05-26 18:12:53.438020 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (122.382827ms) to execute
2019-05-26T18:12:53.438540957Z stderr F 2019-05-26 18:12:53.438454 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:440" took too long (451.409338ms) to execute
2019-05-26T18:12:54.710042251Z stderr F 2019-05-26 18:12:54.709880 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (114.453968ms) to execute
2019-05-26T18:12:54.710624577Z stderr F 2019-05-26 18:12:54.710539 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (346.702834ms) to execute
2019-05-26T18:12:54.711061111Z stderr F 2019-05-26 18:12:54.710995 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (395.357133ms) to execute
2019-05-26T18:12:55.492350384Z stderr F 2019-05-26 18:12:55.492095 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (176.461392ms) to execute
2019-05-26T18:12:55.492832733Z stderr F 2019-05-26 18:12:55.492746 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (129.186559ms) to execute
2019-05-26T18:13:36.554009293Z stderr F 2019-05-26 18:13:36.547386 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (273.753242ms) to execute
2019-05-26T18:13:36.554020988Z stderr F 2019-05-26 18:13:36.547479 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "range_response_count:0 size:7" took too long (491.646017ms) to execute
2019-05-26T18:13:36.554031065Z stderr F 2019-05-26 18:13:36.547796 W | etcdserver: read-only range request "key:\"/registry/secrets\" range_end:\"/registry/secrett\" count_only:true " with result "range_response_count:0 size:7" took too long (460.773727ms) to execute
2019-05-26T18:13:36.554034794Z stderr F 2019-05-26 18:13:36.548025 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kind-control-plane\" " with result "range_response_count:1 size:327" took too long (454.807097ms) to execute
2019-05-26T18:13:36.554053707Z stderr F 2019-05-26 18:13:36.548096 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:440" took too long (747.593016ms) to execute
2019-05-26T18:13:36.554056975Z stderr F 2019-05-26 18:13:36.548468 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kindnet-xcdxm\" " with result "range_response_count:1 size:2142" took too long (926.622549ms) to execute
2019-05-26T18:13:37.574369131Z stderr F 2019-05-26 18:13:37.574199 W | etcdserver: request "header:<ID:12691261351904679893 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-xcdxm\" mod_revision:632 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-xcdxm\" value_size:1963 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-xcdxm\" > >>" with result "size:16" took too long (839.761973ms) to execute
2019-05-26T18:13:37.575204787Z stderr F 2019-05-26 18:13:37.575047 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (739.371952ms) to execute
2019-05-26T18:13:57.764443857Z stderr F 2019-05-26 18:13:57.764217 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:458" took too long (105.914968ms) to execute
2019-05-26T18:13:51.846537838Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
2019-05-26T18:13:56.845869271Z stdout F ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
2019-05-26T18:13:56.845917552Z stdout F WARNING: Ignoring APKINDEX.b89edf6e.tar.gz: No such file or directory
2019-05-26T18:13:56.845924865Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
2019-05-26T18:14:01.852552828Z stdout F ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
2019-05-26T18:14:01.8526736Z stdout F WARNING: Ignoring APKINDEX.737f7e01.tar.gz: No such file or directory
2019-05-26T18:14:01.852821184Z stdout F ERROR: unsatisfiable constraints:
2019-05-26T18:14:01.852830173Z stdout F curl (missing):
2019-05-26T18:14:01.852835723Z stdout F required by: world[curl]
2019-05-26T18:14:01.854809699Z stdout F sh: curl: not found
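The pod output above is apk failing to reach dl-cdn.alpinelinux.org (its lookups are the ones timing out in the CoreDNS logs), so curl is never installed and the subsequent curl invocation fails. A hedged reconstruction of the kind of command that produces this, given the hello-6d6586c69c-z62fd pod name and the interactive attach seen in the containerd log (the exact apk/curl arguments and URL are assumptions):

  kubectl run hello --image=alpine:latest -it -- \
    sh -c 'apk add --no-cache curl && curl -s https://example.com'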
[
{
"Id": "537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473",
"Created": "2019-05-26T18:11:09.750781149Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 6707,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-05-26T18:11:56.143657107Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:9fb4c7da1d9fc73c8269e69578511a792e4a18040992c7ee70b63f67169e85d7",
"ResolvConfPath": "/var/lib/docker/containers/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/hostname",
"HostsPath": "/var/lib/docker/containers/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/hosts",
"LogPath": "/var/lib/docker/containers/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473-json.log",
"Name": "/kind-worker",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": [
"322d50c8eb997c3b2447acd26682e90a54316d1bacac010111dd7db971d76afb"
],
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/380273b40b2f7a95e549ca4342fe6a4a412358456ab3ac8e6293b01829b7cbda-init/diff:/var/lib/docker/overlay2/ee0c1aa4acaf53b51a5e52a58c705b2c8e567a2b6c6f125c65ce3699660125f7/diff:/var/lib/docker/overlay2/2747d608d768e4f1bd18e9f5d7e8d114a65320b7847d7e6ed1c1d64782602f7a/diff:/var/lib/docker/overlay2/7d0d51acdecd201bac9bb9cfeacf36a58c3faeb830ea80ab8d7c53ceca57f3ad/diff:/var/lib/docker/overlay2/dc76aeb4de26f635e95d16d75dcfac6b776e545c12cc305422f6a38b650c0c13/diff:/var/lib/docker/overlay2/8a935a8e56e18c85dca0ff936d1368c85b707dff37c62c6f55704ccf587cd6c7/diff:/var/lib/docker/overlay2/93884106f135445630de12c14f061be62fe1bd2cb73b3d513f8d275589a4bab0/diff:/var/lib/docker/overlay2/c7a6e007715509d993e1639f91e100387c8a2ab54259e6b2cc82709af250e0d7/diff:/var/lib/docker/overlay2/2e576a288b96f4776f172e6c84c3cc29adcdbcb2fa1834ae25f653182c8d4539/diff:/var/lib/docker/overlay2/eb6b59f9955eed342d9ac58569222c19eb89e8031e94db3139f01ed7ba7f3962/diff:/var/lib/docker/overlay2/e879b99cc1de6c1f575d76d3adce17af946885035513c603b5f5f0daf97d3c7a/diff:/var/lib/docker/overlay2/db8fc75eddea0824ab3571289bf505e4571e017bed42cef2c94be55336f67e58/diff:/var/lib/docker/overlay2/8a15a68e538fb1ea01a55f39f1c7db52f250ec96425645a9546c9746e02416cf/diff:/var/lib/docker/overlay2/d6dccbab8cad94360518f39e3b42aab3e0747a6f392d710f3b43469a84613bd3/diff",
"MergedDir": "/var/lib/docker/overlay2/380273b40b2f7a95e549ca4342fe6a4a412358456ab3ac8e6293b01829b7cbda/merged",
"UpperDir": "/var/lib/docker/overlay2/380273b40b2f7a95e549ca4342fe6a4a412358456ab3ac8e6293b01829b7cbda/diff",
"WorkDir": "/var/lib/docker/overlay2/380273b40b2f7a95e549ca4342fe6a4a412358456ab3ac8e6293b01829b7cbda/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "a739f232b064a8ad8ba541a1ca79eecacd065533331824e83b8b6b8f9b6c9f9a",
"Source": "/var/lib/docker/volumes/a739f232b064a8ad8ba541a1ca79eecacd065533331824e83b8b6b8f9b6c9f9a/_data",
"Destination": "/var/lib/containerd",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "kind-worker",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=docker"
],
"Cmd": null,
"Image": "kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319",
"Volumes": {
"/var/lib/containerd": {}
},
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"io.k8s.sigs.kind.build": "2019-05-16T17:59:19.46503226-07:00",
"io.k8s.sigs.kind.cluster": "kind",
"io.k8s.sigs.kind.role": "worker"
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2af32071e93e704abdaebd247b45b0b2898974d8b8016f9e2c9d9b9fd3ab29c5",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/2af32071e93e",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "1dc3bb8dab5e0f5255653bf64f5bb2ab91c794b83bdfed2ecec8faa110af8081",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "50a93fb4a8620f6034e4e65fff61c0f956cc39c202dbd8d7d00900cf6858bd06",
"EndpointID": "1dc3bb8dab5e0f5255653bf64f5bb2ab91c794b83bdfed2ecec8faa110af8081",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
-- Logs begin at Sun 2019-05-26 18:11:56 UTC, end at Sun 2019-05-26 18:14:02 UTC. --
May 26 18:11:56 kind-worker systemd-journald[38]: Journal started
May 26 18:11:56 kind-worker systemd-journald[38]: Runtime journal (/run/log/journal/1a7c76218b624a98a56255feff911c07) is 8.0M, max 373.9M, 365.9M free.
May 26 18:11:56 kind-worker systemd-sysctl[37]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory
May 26 18:11:56 kind-worker systemd-sysusers[39]: Creating group systemd-coredump with gid 999.
May 26 18:11:56 kind-worker systemd-sysusers[39]: Creating user systemd-coredump (systemd Core Dumper) with uid 999 and gid 999.
May 26 18:11:56 kind-worker systemd[1]: Starting Flush Journal to Persistent Storage...
May 26 18:11:56 kind-worker systemd[1]: Started Create Static Device Nodes in /dev.
May 26 18:11:56 kind-worker systemd[1]: Condition check resulted in udev Kernel Device Manager being skipped.
May 26 18:11:56 kind-worker systemd[1]: Reached target System Initialization.
May 26 18:11:56 kind-worker systemd[1]: Started Daily Cleanup of Temporary Directories.
May 26 18:11:56 kind-worker systemd[1]: Reached target Timers.
May 26 18:11:56 kind-worker systemd[1]: Reached target Basic System.
May 26 18:11:56 kind-worker systemd[1]: Starting containerd container runtime...
May 26 18:11:56 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:11:56 kind-worker systemd-journald[38]: Runtime journal (/run/log/journal/1a7c76218b624a98a56255feff911c07) is 8.0M, max 373.9M, 365.9M free.
May 26 18:11:56 kind-worker systemd[1]: Started Flush Journal to Persistent Storage.
May 26 18:11:56 kind-worker systemd[1]: Started containerd container runtime.
May 26 18:11:56 kind-worker systemd[1]: Reached target Multi-User System.
May 26 18:11:56 kind-worker systemd[1]: Reached target Graphical Interface.
May 26 18:11:56 kind-worker systemd[1]: Starting Update UTMP about System Runlevel Changes...
May 26 18:11:56 kind-worker systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
May 26 18:11:56 kind-worker systemd[1]: Started Update UTMP about System Runlevel Changes.
May 26 18:11:56 kind-worker systemd[1]: Startup finished in 549ms.
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.989263789Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.990007529Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994376939Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994465570Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994597818Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994770025Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.994905552Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995071955Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995162444Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995226090Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995311531Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995381126Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995439857Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995496737Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995562834Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995690534Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.995784781Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996368885Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996487000Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996615830Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996680007Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996737660Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996799361Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996870290Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996931848Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.996986327Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997039743Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997111324Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997277153Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997340470Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997396051Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 26 18:11:56 kind-worker containerd[46]: time="2019-05-26T18:11:56.997456311Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:56.997697766Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003051760Z" level=info msg="Connect containerd service"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003281877Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003546562Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.003917214Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.004235370Z" level=info msg=serving... address="/run/containerd/containerd.sock"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.004328853Z" level=info msg="containerd successfully booted in 0.016052s"
May 26 18:11:57 kind-worker kubelet[45]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:11:57 kind-worker kubelet[45]: F0526 18:11:57.031783 45 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:11:57 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.043208662Z" level=info msg="Start subscribing containerd event"
May 26 18:11:57 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.047561519Z" level=info msg="Start recovering state"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.059672311Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.060879991Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.061861018Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.062758415Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.074981246Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.075930982Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.076849709Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.082478316Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.083226186Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.083952001Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.084690924Z" level=warning msg="The image sha256:1c93cc1335f8df0a96db1a773bb2851920fb574e1c9386f3960674279d5b978b is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.085352505Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.086057992Z" level=warning msg="The image sha256:58f6abb9fb1b336348d3bb9dd80b5ecbc8dc963a3c1c20e778a0c20d3ed25344 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.086667202Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.091904159Z" level=warning msg="The image sha256:e455634c173b0060e537f229155cbb1649d96945d8de54f3321eebd092d66a0c is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.092652169Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093336190Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked."
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093857166Z" level=info msg="Start event monitor"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.093978132Z" level=info msg="Start snapshots syncer"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.094248167Z" level=info msg="Start streaming server"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.094194985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.095205652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/ip-masq-agent:v2.4.1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.095818016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7387c4b88e2df50ccca4f6f8167992605cfe50d0075a647b5ab5187378ac2bd8,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:11:57 kind-worker containerd[46]: time="2019-05-26T18:11:57.096588200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/kube-proxy:v1.14.2,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 26 18:12:07 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:07 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:07 kind-worker kubelet[64]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:07 kind-worker kubelet[64]: F0526 18:12:07.241852 64 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 26 18:12:17 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:17 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:17 kind-worker kubelet[72]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:17 kind-worker kubelet[72]: F0526 18:12:17.529057 72 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 26 18:12:27 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:27 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:27 kind-worker kubelet[80]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:27 kind-worker kubelet[80]: F0526 18:12:27.761741 80 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:31 kind-worker containerd[46]: time="2019-05-26T18:12:31.786335741Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 26 18:12:37 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:37 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:37 kind-worker kubelet[117]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:37 kind-worker kubelet[117]: F0526 18:12:37.983346 117 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:47 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:47 kind-worker systemd[1]: Reloading.
May 26 18:12:47 kind-worker systemd[1]: Configuration file /etc/systemd/system/containerd.service.d/10-restart.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
May 26 18:12:47 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:47 kind-worker kubelet[150]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:47 kind-worker kubelet[150]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:48 kind-worker systemd[1]: Started Kubernetes systemd probe.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165119 150 server.go:417] Version: v1.14.2
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165455 150 plugins.go:103] No cloud provider specified.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165480 150 server.go:754] Client rotation is on, will bootstrap in background
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.193996 150 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197314 150 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197347 150 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197425 150 container_manager_linux.go:286] Creating device plugin manager: true
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197541 150 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.207280 150 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.207342 150 kubelet.go:304] Watching apiserver
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.220319 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.220539 150 file.go:98] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.230857 150 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231108 150 remote_runtime.go:62] parsed scheme: ""
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231172 150 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.231256 150 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231408 150 remote_image.go:50] parsed scheme: ""
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231495 150 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231370 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231807 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231926 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457710, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232438 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457710, READY
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232611 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232679 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.241328 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00035e1e0, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.241921 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00035e1e0, READY
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.243960 150 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.244365 150 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.248926 150 server.go:1037] Started kubelet
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.250949 150 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251068 150 status_manager.go:152] Starting to sync pod status with apiserver
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251139 150 kubelet.go:1806] Starting kubelet main sync loop.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251217 150 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251370 150 server.go:141] Starting to listen on 0.0.0.0:10250
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.252218 150 server.go:343] Adding debug handlers to kubelet server.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.253657 150 volume_manager.go:248] Starting Kubelet Volume Manager
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.257857 150 desired_state_of_world_populator.go:130] Desired state populator starts to run
May 26 18:12:48 kind-worker containerd[46]: time="2019-05-26T18:12:48.262263971Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.276100 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.284519 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.284948 150 clientconn.go:440] parsed scheme: "unix"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285055 150 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285169 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285250 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285367 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009f3670, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285827 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009f3670, READY
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.291417 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.295245 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.295629 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.338049 150 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.340506 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b25d511c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a80ed5d1c3, ext:951327693, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a80ed5d1c3, ext:951327693, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.366075 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.368834 150 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.372316 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.372842 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.381342 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.386920 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388551 150 cpu_manager.go:155] [cpumanager] starting with none policy
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388683 150 cpu_manager.go:156] [cpumanager] reconciling every 10s
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388743 150 policy_none.go:42] [cpumanager] none policy: Start
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.397668 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.424572 150 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.431735 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.451882 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.454609 150 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.459171 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.464287 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a81727ecc9, ext:1090926289, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.465452 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8172805c3, ext:1090932679, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.466508 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a817281732, ext:1090937147, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.467409 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b30d34d89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a819d40d89, ext:1135761297, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a819d40d89, ext:1135761297, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.472967 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.551815 150 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.573261 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.598105 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.599638 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.601261 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.601212 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bccd49, ext:1302009681, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.602358 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bcf7ac, ext:1302020527, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.603492 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bd6854, ext:1302049369, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.673626 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.773955 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.874130 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.953393 150 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.974341 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.001612 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.004439 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.005887 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.006263 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840432f48, ext:1706836801, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.007354 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840435a72, ext:1706847856, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.051138 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840436e60, ext:1706852951, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.074535 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.174714 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.220710 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.274915 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.286063 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.292748 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.296761 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.300024 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.369084 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.375079 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.475271 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.575459 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.675654 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.754919 150 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.775860 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.806291 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.807708 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.809204 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.809281 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a87023e37a, ext:2510092152, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.810137 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8702416ed, ext:2510105327, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.811073 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8702432f6, ext:2510112502, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.876039 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.976237 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.076407 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.176669 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.220978 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.276896 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.287716 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.294146 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.297886 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.301151 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.370613 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.377087 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.478309 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.578569 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.678717 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.779191 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.879390 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.979539 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.079737 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.179939 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.221239 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.280601 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.289165 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.295414 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.298891 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.302311 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.356252 150 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.371757 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.381030 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: I0526 18:12:51.409385 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:51 kind-worker kubelet[150]: I0526 18:12:51.410633 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.411952 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.412139 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8790400, ext:4113017851, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.412889 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8793b58, ext:4113032023, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.413571 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8795399, ext:4113038225, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.481187 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.581348 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.681467 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.781644 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.881807 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.982174 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.082323 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.182487 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.221544 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.282634 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.290646 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.296815 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.300144 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.303624 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.372892 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.383138 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.483327 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.583525 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.683647 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.783828 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.883993 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.984173 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.084357 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.184542 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.221773 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.284704 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.292248 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.298266 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.301192 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.304601 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.374338 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.385101 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker containerd[46]: time="2019-05-26T18:12:53.433725012Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.433941 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.485281 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.585445 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.685608 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.787227 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.887466 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.987653 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.087848 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.188074 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.222241 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.288262 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.293930 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.299931 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.302706 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.305679 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.375847 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.388454 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.488601 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.557563 150 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.589372 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: I0526 18:12:54.612211 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:54 kind-worker kubelet[150]: I0526 18:12:54.613353 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.614751 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48e614f, ext:7315744593, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.615273 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.615752 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48eac88, ext:7315763845, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.616480 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48e98ba, ext:7315758776, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.689579 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.789753 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.889942 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.990119 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.090527 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.190717 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.222471 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.290975 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.295594 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.301195 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.303780 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.306788 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.391163 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.421917 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.491370 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.592585 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.692756 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.792926 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.893112 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.993337 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.093527 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.193740 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.222688 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.294099 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.297095 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.302623 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.304680 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.307982 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.394329 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.423549 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.494727 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.594932 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.695114 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.795321 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.897144 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.997337 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.097546 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.197723 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.222960 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.297846 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.298669 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.303906 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.305687 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.308884 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.398066 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.425260 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.498296 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.598563 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.698767 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.798993 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.899256 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.999463 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.099695 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: I0526 18:12:58.169927 150 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.199982 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.223189 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.302899 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: I0526 18:12:58.392492 150 reconciler.go:154] Reconciler: start to sync state
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.403091 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker containerd[46]: time="2019-05-26T18:12:58.434854145Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.435180 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.455279 150 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.503293 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.603477 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.703659 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.804100 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.904344 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.004743 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.104908 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.205134 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.223410 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.305377 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.405569 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.505742 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.606057 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.706257 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.806545 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.906753 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.007213 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.107425 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.209175 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.223626 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.309366 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.409596 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.509821 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.610047 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.710270 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.810539 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.910774 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.961977 150 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.011041 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.015486 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.016580 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.029230 150 kubelet_node_status.go:75] Successfully registered node kind-worker
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.111239 150 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.1.0/24
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.111586003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.111899 150 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.1.0/24
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.112168294Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.112376 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.224043 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.299996 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-qsjml" (UniqueName: "kubernetes.io/secret/e5bf0d1f-7fe1-11e9-83f7-0242ac110003-kindnet-token-qsjml") pod "kindnet-ntf8l" (UID: "e5bf0d1f-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300053 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e5be4844-7fe1-11e9-83f7-0242ac110003-kube-proxy") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300086 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/e5be4844-7fe1-11e9-83f7-0242ac110003-xtables-lock") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300113 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/e5be4844-7fe1-11e9-83f7-0242ac110003-lib-modules") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300141 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-tqsn6" (UniqueName: "kubernetes.io/secret/e5be4844-7fe1-11e9-83f7-0242ac110003-kube-proxy-token-tqsn6") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300171 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/e5d062ca-7fe1-11e9-83f7-0242ac110003-config") pod "ip-masq-agent-bq6xs" (UID: "e5d062ca-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300199 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-pqnvx" (UniqueName: "kubernetes.io/secret/e5d062ca-7fe1-11e9-83f7-0242ac110003-ip-masq-agent-token-pqnvx") pod "ip-masq-agent-bq6xs" (UID: "e5d062ca-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300225 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/e5bf0d1f-7fe1-11e9-83f7-0242ac110003-cni-cfg") pod "kindnet-ntf8l" (UID: "e5bf0d1f-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.485210404Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-btr2x,Uid:e5be4844-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313757627.mount: Succeeded.
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.513026252Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-bq6xs,Uid:e5d062ca-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.527235700Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-ntf8l,Uid:e5bf0d1f-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,}"
May 26 18:13:01 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804188069.mount: Succeeded.
May 26 18:13:01 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258220831.mount: Succeeded.
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.579774350Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d/shim.sock" debug=false pid=193
May 26 18:13:01 kind-worker systemd[1]: run-containerd-runc-k8s.io-c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d-runc.wa1N3a.mount: Succeeded.
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.604123032Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d/shim.sock" debug=false pid=212
May 26 18:13:01 kind-worker systemd[1]: run-containerd-runc-k8s.io-afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d-runc.hJULAc.mount: Succeeded.
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.649363377Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53/shim.sock" debug=false pid=230
May 26 18:13:01 kind-worker systemd[1]: run-containerd-runc-k8s.io-74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53-runc.Q0bEMi.mount: Succeeded.
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.860006893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btr2x,Uid:e5be4844-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d""
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.873146635Z" level=info msg="CreateContainer within sandbox "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.932239610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-bq6xs,Uid:e5d062ca-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d""
May 26 18:13:01 kind-worker containerd[46]: time="2019-05-26T18:13:01.941955877Z" level=info msg="CreateContainer within sandbox "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
May 26 18:13:02 kind-worker containerd[46]: time="2019-05-26T18:13:02.102819413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ntf8l,Uid:e5bf0d1f-7fe1-11e9-83f7-0242ac110003,Namespace:kube-system,Attempt:0,} returns sandbox id "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53""
May 26 18:13:02 kind-worker containerd[46]: time="2019-05-26T18:13:02.153346509Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
May 26 18:13:02 kind-worker kubelet[150]: E0526 18:13:02.224345 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:03 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437427915.mount: Succeeded.
May 26 18:13:03 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775187253.mount: Succeeded.
May 26 18:13:03 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400337168.mount: Succeeded.
May 26 18:13:03 kind-worker kubelet[150]: E0526 18:13:03.224549 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.236498653Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.284285412Z" level=info msg="StartContainer for "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.287049577Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2/shim.sock" debug=false pid=359
May 26 18:13:03 kind-worker systemd[1]: run-containerd-runc-k8s.io-dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2-runc.Vel4LV.mount: Succeeded.
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.467012087Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:03 kind-worker kubelet[150]: E0526 18:13:03.594795 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:03 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount037001197.mount: Succeeded.
May 26 18:13:03 kind-worker containerd[46]: time="2019-05-26T18:13:03.831779196Z" level=info msg="StartContainer for "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2" returns successfully"
May 26 18:13:03 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572053978.mount: Succeeded.
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129249092.mount: Succeeded.
May 26 18:13:04 kind-worker kubelet[150]: E0526 18:13:04.224705 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836641558.mount: Succeeded.
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733431992.mount: Succeeded.
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808056684.mount: Succeeded.
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801588699.mount: Succeeded.
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.520658348Z" level=info msg="CreateContainer within sandbox "c4ebde15572dbb2f4e972c168511db0431bc8553fcb5bc4e8a519daa8fb2520d" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3""
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.523057760Z" level=info msg="StartContainer for "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3""
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.526601864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3/shim.sock" debug=false pid=412
May 26 18:13:04 kind-worker containerd[46]: time="2019-05-26T18:13:04.971394474Z" level=info msg="StartContainer for "bdcacad189d30de4dcf28a1b4607519be2f08ba9757e6d8680aaf18df9cdc6a3" returns successfully"
May 26 18:13:04 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907444394.mount: Succeeded.
May 26 18:13:05 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727051071.mount: Succeeded.
May 26 18:13:05 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762866578.mount: Succeeded.
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.103711799Z" level=info msg="CreateContainer within sandbox "afbf973e46f02ec5f2a9579f24f61281a485a21587c137c1a37247c63bb5c21d" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb""
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.104530513Z" level=info msg="StartContainer for "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb""
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.105690740Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb/shim.sock" debug=false pid=467
May 26 18:13:05 kind-worker systemd[1]: run-containerd-runc-k8s.io-aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb-runc.R2wOQN.mount: Succeeded.
May 26 18:13:05 kind-worker kubelet[150]: E0526 18:13:05.224931 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:05 kind-worker containerd[46]: time="2019-05-26T18:13:05.451105059Z" level=info msg="StartContainer for "aa210783339eb5684ec4dd2e93f9d48c52893c9fbab1f6e40be87a0e4bff9fcb" returns successfully"
May 26 18:13:06 kind-worker kubelet[150]: E0526 18:13:06.226473 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:07 kind-worker kubelet[150]: E0526 18:13:07.226693 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.207516 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.226953 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.473934 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:08 kind-worker containerd[46]: time="2019-05-26T18:13:08.595617544Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.596388 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:09 kind-worker kubelet[150]: E0526 18:13:09.227257 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:10 kind-worker kubelet[150]: E0526 18:13:10.227500 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:11 kind-worker kubelet[150]: E0526 18:13:11.227757 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:12 kind-worker kubelet[150]: E0526 18:13:12.228012 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:13 kind-worker kubelet[150]: E0526 18:13:13.228920 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:13 kind-worker containerd[46]: time="2019-05-26T18:13:13.597156379Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:13 kind-worker kubelet[150]: E0526 18:13:13.597448 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:14 kind-worker kubelet[150]: E0526 18:13:14.229209 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:15 kind-worker kubelet[150]: E0526 18:13:15.229429 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:16 kind-worker kubelet[150]: E0526 18:13:16.229718 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:17 kind-worker kubelet[150]: E0526 18:13:17.230042 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.230280 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.495285 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:18 kind-worker containerd[46]: time="2019-05-26T18:13:18.598350917Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.599086 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:19 kind-worker kubelet[150]: E0526 18:13:19.230573 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:20 kind-worker kubelet[150]: E0526 18:13:20.230925 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:21 kind-worker kubelet[150]: E0526 18:13:21.231201 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:22 kind-worker kubelet[150]: E0526 18:13:22.231518 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:23 kind-worker kubelet[150]: E0526 18:13:23.231811 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:23 kind-worker containerd[46]: time="2019-05-26T18:13:23.599748753Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:23 kind-worker kubelet[150]: E0526 18:13:23.600418 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:24 kind-worker kubelet[150]: E0526 18:13:24.232126 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:25 kind-worker kubelet[150]: E0526 18:13:25.232392 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:26 kind-worker kubelet[150]: E0526 18:13:26.232660 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:27 kind-worker kubelet[150]: E0526 18:13:27.233448 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.207605 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.233732 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.539162 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:28 kind-worker containerd[46]: time="2019-05-26T18:13:28.601260025Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.601984 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:29 kind-worker kubelet[150]: E0526 18:13:29.233993 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:30 kind-worker kubelet[150]: E0526 18:13:30.234274 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:31 kind-worker kubelet[150]: E0526 18:13:31.234578 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:32 kind-worker kubelet[150]: E0526 18:13:32.235404 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:33 kind-worker kubelet[150]: E0526 18:13:33.235657 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.602895744Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 26 18:13:33 kind-worker kubelet[150]: E0526 18:13:33.603240 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.823361052Z" level=info msg="Finish piping stderr of container "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.823433789Z" level=info msg="Finish piping stdout of container "dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2""
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.859498828Z" level=info msg="TaskExit event &TaskExit{ContainerID:dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2,ID:dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2,Pid:378,ExitStatus:2,ExitedAt:2019-05-26 18:13:33.823895051 +0000 UTC,}"
May 26 18:13:33 kind-worker systemd[1]: run-containerd-io.containerd.runtime.v1.linux-k8s.io-dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2-rootfs.mount: Succeeded.
May 26 18:13:33 kind-worker containerd[46]: time="2019-05-26T18:13:33.905400278Z" level=info msg="shim reaped" id=dd6482736f138d2ff6c3aad2e0690d564adc0a5849655d555eba1d48700911b2
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.060682049Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.115186291Z" level=info msg="CreateContainer within sandbox "74a0314e32537bf951b15c6cafff401eafb06634112dac5d3f37f5ab800e3c53" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38""
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.116148002Z" level=info msg="StartContainer for "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38""
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.117074579Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38/shim.sock" debug=false pid=634
May 26 18:13:34 kind-worker systemd[1]: run-containerd-runc-k8s.io-8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38-runc.ngX96U.mount: Succeeded.
May 26 18:13:34 kind-worker kubelet[150]: E0526 18:13:34.235910 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:34 kind-worker containerd[46]: time="2019-05-26T18:13:34.397934300Z" level=info msg="StartContainer for "8c4d33757e16c8ba5f40623ad20c050b2e8acef701913cc15f5f7100c8d0ad38" returns successfully"
May 26 18:13:35 kind-worker kubelet[150]: E0526 18:13:35.236146 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:36 kind-worker kubelet[150]: E0526 18:13:36.236798 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:37 kind-worker kubelet[150]: E0526 18:13:37.237147 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:38 kind-worker kubelet[150]: E0526 18:13:38.237501 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:38 kind-worker kubelet[150]: E0526 18:13:38.569334 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:39 kind-worker kubelet[150]: E0526 18:13:39.237768 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:40 kind-worker kubelet[150]: E0526 18:13:40.238053 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:41 kind-worker kubelet[150]: E0526 18:13:41.238295 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:42 kind-worker kubelet[150]: E0526 18:13:42.238682 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:43 kind-worker kubelet[150]: E0526 18:13:43.238971 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:44 kind-worker kubelet[150]: E0526 18:13:44.239922 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:45 kind-worker kubelet[150]: E0526 18:13:45.240165 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:46 kind-worker kubelet[150]: E0526 18:13:46.240373 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:47 kind-worker kubelet[150]: E0526 18:13:47.240551 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.207556 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.240919 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.593765 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.901095 150 reflector.go:126] object-"default"/"default-token-n7dzs": Failed to list *v1.Secret: secrets "default-token-n7dzs" is forbidden: User "system:node:kind-worker" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "kind-worker" and this object
May 26 18:13:49 kind-worker kubelet[150]: I0526 18:13:49.062881 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-n7dzs" (UniqueName: "kubernetes.io/secret/e8bf84f9-7fe1-11e9-83f7-0242ac110003-default-token-n7dzs") pod "hello-6d6586c69c-z62fd" (UID: "e8bf84f9-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:49 kind-worker kubelet[150]: E0526 18:13:49.241163 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.102626775Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-z62fd,Uid:e8bf84f9-7fe1-11e9-83f7-0242ac110003,Namespace:default,Attempt:0,}"
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.169168605Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61/shim.sock" debug=false pid=756
May 26 18:13:50 kind-worker kubelet[150]: E0526 18:13:50.241427 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.321808570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-6d6586c69c-z62fd,Uid:e8bf84f9-7fe1-11e9-83f7-0242ac110003,Namespace:default,Attempt:0,} returns sandbox id "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61""
May 26 18:13:50 kind-worker containerd[46]: time="2019-05-26T18:13:50.324694756Z" level=info msg="PullImage "alpine:latest""
May 26 18:13:51 kind-worker kubelet[150]: E0526 18:13:51.241747 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.472373222Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine:latest,Labels:map[string]string{},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.480542026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.481257780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176716470.mount: Succeeded.
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.631930032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.633416846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.639428372Z" level=info msg="PullImage "alpine:latest" returns image reference "sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.639625825Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.641864830Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for container &ContainerMetadata{Name:hello,Attempt:0,}"
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.699340924Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for &ContainerMetadata{Name:hello,Attempt:0,} returns container id "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.700121136Z" level=info msg="StartContainer for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.701069220Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d/shim.sock" debug=false pid=812
May 26 18:13:51 kind-worker containerd[46]: time="2019-05-26T18:13:51.860954184Z" level=info msg="StartContainer for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" returns successfully"
May 26 18:13:52 kind-worker containerd[46]: time="2019-05-26T18:13:52.125427681Z" level=info msg="Attach for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" with tty true and stdin true"
May 26 18:13:52 kind-worker containerd[46]: time="2019-05-26T18:13:52.125539133Z" level=info msg="Attach for "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d" returns URL "http://127.0.0.1:45391/attach/C3KQSoVo""
May 26 18:13:52 kind-worker kubelet[150]: E0526 18:13:52.241996 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:53 kind-worker kubelet[150]: E0526 18:13:53.242252 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:54 kind-worker kubelet[150]: E0526 18:13:54.252148 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:55 kind-worker kubelet[150]: E0526 18:13:55.253062 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:56 kind-worker kubelet[150]: E0526 18:13:56.253773 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:57 kind-worker kubelet[150]: E0526 18:13:57.254455 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:58 kind-worker kubelet[150]: E0526 18:13:58.254719 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:58 kind-worker kubelet[150]: E0526 18:13:58.620330 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:59 kind-worker kubelet[150]: E0526 18:13:59.254943 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:00 kind-worker kubelet[150]: E0526 18:14:00.255165 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:01 kind-worker kubelet[150]: E0526 18:14:01.256199 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.868417524Z" level=info msg="Finish piping stdout of container "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d""
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.868472846Z" level=info msg="Attach stream "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d-attach-8b8504e51bd29f191fca68333687e4288c5eb596701a04931f99ffecd2ca661d-stdout" closed"
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.870223959Z" level=info msg="Attach stream "b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d-attach-8b8504e51bd29f191fca68333687e4288c5eb596701a04931f99ffecd2ca661d-stdin" closed"
May 26 18:14:01 kind-worker kubelet[150]: E0526 18:14:01.875034 150 upgradeaware.go:370] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42634->127.0.0.1:45391: write tcp 127.0.0.1:42634->127.0.0.1:45391: write: broken pipe
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.923707837Z" level=info msg="TaskExit event &TaskExit{ContainerID:b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d,ID:b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d,Pid:829,ExitStatus:127,ExitedAt:2019-05-26 18:14:01.867665909 +0000 UTC,}"
May 26 18:14:01 kind-worker systemd[1]: run-containerd-io.containerd.runtime.v1.linux-k8s.io-b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d-rootfs.mount: Succeeded.
May 26 18:14:01 kind-worker containerd[46]: time="2019-05-26T18:14:01.993627385Z" level=info msg="shim reaped" id=b0bc1a1cb776b3cfd164c412fa850453414cae4c9b084f3c90b4f14b68df6e7d
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.110581383Z" level=info msg="PullImage "alpine:latest""
May 26 18:14:02 kind-worker kubelet[150]: E0526 18:14:02.260644 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.504723743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.515346849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.516010810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.516807084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.521664450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine:latest,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.523975911Z" level=info msg="PullImage "alpine:latest" returns image reference "sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1""
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.531255956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6,Labels:map[string]string{io.cri-containerd.image: managed,},}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.539946413Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for container &ContainerMetadata{Name:hello,Attempt:1,}"
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.620026145Z" level=info msg="CreateContainer within sandbox "d02416a779d2e18fd0903eecfd151cb270e1398fd2f26ccbea841aaa74dbfb61" for &ContainerMetadata{Name:hello,Attempt:1,} returns container id "1a0701faa89a97e204050b03586242a9f2e9f466507970075fc4bf7365538e10""
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.621774511Z" level=info msg="StartContainer for "1a0701faa89a97e204050b03586242a9f2e9f466507970075fc4bf7365538e10""
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.622651491Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/1a0701faa89a97e204050b03586242a9f2e9f466507970075fc4bf7365538e10/shim.sock" debug=false pid=918
May 26 18:14:02 kind-worker containerd[46]: time="2019-05-26T18:14:02.867914974Z" level=info msg="StartContainer for "1a0701faa89a97e204050b03586242a9f2e9f466507970075fc4bf7365538e10" returns successfully"
2019-05-26T18:13:03.816898132Z stdout F hostIP = 172.17.0.2
2019-05-26T18:13:03.816923861Z stdout F podIP = 172.17.0.2
2019-05-26T18:13:33.813582636Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-05-26T18:13:33.813619442Z stderr F
2019-05-26T18:13:33.813625312Z stderr F goroutine 1 [running]:
2019-05-26T18:13:33.813628853Z stderr F main.main()
2019-05-26T18:13:33.813635985Z stderr F /src/main.go:84 +0x423
2019-05-26T18:12:46.724764277Z stdout F hostIP = 172.17.0.3
2019-05-26T18:12:46.724821261Z stdout F podIP = 172.17.0.3
2019-05-26T18:13:16.727395792Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-05-26T18:13:16.727438816Z stderr F
2019-05-26T18:13:16.727446652Z stderr F goroutine 1 [running]:
2019-05-26T18:13:16.727451735Z stderr F main.main()
2019-05-26T18:13:16.727458808Z stderr F /src/main.go:84 +0x423
2019-05-26T18:13:17.908692057Z stdout F hostIP = 172.17.0.3
2019-05-26T18:13:17.908743987Z stdout F podIP = 172.17.0.3
2019-05-26T18:13:17.921381943Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:17.921426852Z stdout F handling current node
2019-05-26T18:13:18.003185658Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:18.003218674Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:18.003949878Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-05-26T18:13:18.003968352Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:18.003974608Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:18.003980729Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-05-26T18:13:28.008293459Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:28.008331554Z stdout F handling current node
2019-05-26T18:13:28.00833686Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:28.008340653Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:28.008343834Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:28.008347342Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:38.103383206Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:38.103441393Z stdout F handling current node
2019-05-26T18:13:38.103447178Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:38.103450635Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:38.103495428Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:38.103524359Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:48.107656815Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:48.107727371Z stdout F handling current node
2019-05-26T18:13:48.107734244Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:48.107737258Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:48.107740613Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:48.107743449Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:58.111786118Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:58.111839027Z stdout F handling current node
2019-05-26T18:13:58.111848418Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:58.11185288Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:58.111857281Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:58.111861609Z stdout F Node kind-worker2 has CIDR 10.244.2.0/24
2019-05-26T18:13:03.911767768Z stdout F hostIP = 172.17.0.4
2019-05-26T18:13:03.911797672Z stdout F podIP = 172.17.0.4
2019-05-26T18:13:33.911864313Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-05-26T18:13:33.911906121Z stderr F
2019-05-26T18:13:33.911911437Z stderr F goroutine 1 [running]:
2019-05-26T18:13:33.911915271Z stderr F main.main()
2019-05-26T18:13:33.911920685Z stderr F /src/main.go:84 +0x423
2019-05-26T18:13:35.00521956Z stdout F hostIP = 172.17.0.4
2019-05-26T18:13:35.005262787Z stdout F podIP = 172.17.0.4
2019-05-26T18:13:35.015647209Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:35.015678705Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:35.017296794Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-05-26T18:13:35.017311249Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:35.017315392Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:35.017318311Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-05-26T18:13:35.017321559Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:35.017324463Z stdout F handling current node
2019-05-26T18:13:45.113307597Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:45.113342109Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:45.113349037Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:45.113353803Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:45.113358696Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:45.113362671Z stdout F handling current node
2019-05-26T18:13:55.116955694Z stdout F Handling node with IP: 172.17.0.3
2019-05-26T18:13:55.11703127Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-05-26T18:13:55.117098435Z stdout F Handling node with IP: 172.17.0.2
2019-05-26T18:13:55.117104332Z stdout F Node kind-worker has CIDR 10.244.1.0/24
2019-05-26T18:13:55.117108739Z stdout F Handling node with IP: 172.17.0.4
2019-05-26T18:13:55.117124493Z stdout F handling current node
2019-05-26T18:12:20.927847326Z stderr F Flag --insecure-port has been deprecated, This flag will be removed in a future version.
2019-05-26T18:12:20.927901936Z stderr F I0526 18:12:20.919351 1 server.go:559] external host was not specified, using 172.17.0.3
2019-05-26T18:12:20.927910298Z stderr F I0526 18:12:20.919482 1 server.go:146] Version: v1.14.2
2019-05-26T18:12:21.245026019Z stderr F I0526 18:12:21.244845 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
2019-05-26T18:12:21.245083604Z stderr F I0526 18:12:21.244878 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
2019-05-26T18:12:21.245781796Z stderr F E0526 18:12:21.245688 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245805457Z stderr F E0526 18:12:21.245716 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245880397Z stderr F E0526 18:12:21.245755 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245892407Z stderr F E0526 18:12:21.245790 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245918418Z stderr F E0526 18:12:21.245811 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245937126Z stderr F E0526 18:12:21.245827 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:21.245944115Z stderr F I0526 18:12:21.245846 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
2019-05-26T18:12:21.245949095Z stderr F I0526 18:12:21.245854 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
2019-05-26T18:12:21.247909506Z stderr F I0526 18:12:21.247814 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.247927835Z stderr F I0526 18:12:21.247833 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.247980374Z stderr F I0526 18:12:21.247892 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.24801883Z stderr F I0526 18:12:21.247974 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.857882351Z stderr F I0526 18:12:21.857751 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.858612623Z stderr F I0526 18:12:21.858531 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.85865105Z stderr F I0526 18:12:21.858630 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.858762047Z stderr F I0526 18:12:21.858725 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.858908385Z stderr F I0526 18:12:21.858873 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.883224269Z stderr F I0526 18:12:21.883022 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.908823227Z stderr F I0526 18:12:21.908669 1 master.go:233] Using reconciler: lease
2019-05-26T18:12:21.909492193Z stderr F I0526 18:12:21.909357 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.909510446Z stderr F I0526 18:12:21.909393 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.909517069Z stderr F I0526 18:12:21.909435 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.909568334Z stderr F I0526 18:12:21.909481 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.9276301Z stderr F I0526 18:12:21.927486 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.927760781Z stderr F I0526 18:12:21.927727 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.928817827Z stderr F I0526 18:12:21.928731 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.929049013Z stderr F I0526 18:12:21.929009 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.929309877Z stderr F I0526 18:12:21.929268 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.94121017Z stderr F I0526 18:12:21.941005 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.941243762Z stderr F I0526 18:12:21.941033 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.941250542Z stderr F I0526 18:12:21.941124 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.94166542Z stderr F I0526 18:12:21.941271 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.941821164Z stderr F I0526 18:12:21.941744 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.95260983Z stderr F I0526 18:12:21.952469 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.952852642Z stderr F I0526 18:12:21.952794 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.952878488Z stderr F I0526 18:12:21.952858 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.952972829Z stderr F I0526 18:12:21.952933 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.953036511Z stderr F I0526 18:12:21.953011 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.963440656Z stderr F I0526 18:12:21.963302 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.963787447Z stderr F I0526 18:12:21.963721 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.963816911Z stderr F I0526 18:12:21.963790 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.963913274Z stderr F I0526 18:12:21.963876 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.963983139Z stderr F I0526 18:12:21.963955 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.974975051Z stderr F I0526 18:12:21.974724 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.976059713Z stderr F I0526 18:12:21.975959 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.976176647Z stderr F I0526 18:12:21.976010 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.976270847Z stderr F I0526 18:12:21.976054 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.976278866Z stderr F I0526 18:12:21.976199 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.989300923Z stderr F I0526 18:12:21.989114 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:21.990484432Z stderr F I0526 18:12:21.990386 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:21.99052009Z stderr F I0526 18:12:21.990495 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:21.990641781Z stderr F I0526 18:12:21.990587 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:21.990752375Z stderr F I0526 18:12:21.990686 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.003211954Z stderr F I0526 18:12:22.003012 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.003278749Z stderr F I0526 18:12:22.003041 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.003285399Z stderr F I0526 18:12:22.003128 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.003339379Z stderr F I0526 18:12:22.003248 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.003523108Z stderr F I0526 18:12:22.003471 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.01670388Z stderr F I0526 18:12:22.016537 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.01679199Z stderr F I0526 18:12:22.016563 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.016798262Z stderr F I0526 18:12:22.016606 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.016858066Z stderr F I0526 18:12:22.016702 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.017255593Z stderr F I0526 18:12:22.017179 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.030895644Z stderr F I0526 18:12:22.030651 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.030929209Z stderr F I0526 18:12:22.030681 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.0309356Z stderr F I0526 18:12:22.030724 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.030994967Z stderr F I0526 18:12:22.030905 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.031255886Z stderr F I0526 18:12:22.031174 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.044760684Z stderr F I0526 18:12:22.044611 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.04541852Z stderr F I0526 18:12:22.045332 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.045551314Z stderr F I0526 18:12:22.045504 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.045695285Z stderr F I0526 18:12:22.045661 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.045831376Z stderr F I0526 18:12:22.045787 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.06181914Z stderr F I0526 18:12:22.061623 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.061866246Z stderr F I0526 18:12:22.061658 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.06189177Z stderr F I0526 18:12:22.061699 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.061951091Z stderr F I0526 18:12:22.061828 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.062154794Z stderr F I0526 18:12:22.062059 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.075019571Z stderr F I0526 18:12:22.074796 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.075056924Z stderr F I0526 18:12:22.074869 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.075113538Z stderr F I0526 18:12:22.074911 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.07517011Z stderr F I0526 18:12:22.075044 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.0753275Z stderr F I0526 18:12:22.075281 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.088574596Z stderr F I0526 18:12:22.088433 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.089662206Z stderr F I0526 18:12:22.089556 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.089686728Z stderr F I0526 18:12:22.089579 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.089702975Z stderr F I0526 18:12:22.089617 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.089750489Z stderr F I0526 18:12:22.089683 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.102509656Z stderr F I0526 18:12:22.102376 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.10297114Z stderr F I0526 18:12:22.102902 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.10315418Z stderr F I0526 18:12:22.103104 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.103246882Z stderr F I0526 18:12:22.102611 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.103346541Z stderr F I0526 18:12:22.103294 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.116401378Z stderr F I0526 18:12:22.116262 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.11691501Z stderr F I0526 18:12:22.116832 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.117719864Z stderr F I0526 18:12:22.117645 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.118154078Z stderr F I0526 18:12:22.118049 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.118605697Z stderr F I0526 18:12:22.118556 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.138601728Z stderr F I0526 18:12:22.138397 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.138962478Z stderr F I0526 18:12:22.138788 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.138978334Z stderr F I0526 18:12:22.138805 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.139027119Z stderr F I0526 18:12:22.138866 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.139147453Z stderr F I0526 18:12:22.138911 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.151516741Z stderr F I0526 18:12:22.151369 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.151552412Z stderr F I0526 18:12:22.151399 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.151559127Z stderr F I0526 18:12:22.151465 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.151600262Z stderr F I0526 18:12:22.151552 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.151650893Z stderr F I0526 18:12:22.151623 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.167109628Z stderr F I0526 18:12:22.166932 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.247260517Z stderr F I0526 18:12:22.247013 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.247421217Z stderr F I0526 18:12:22.247358 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.247742667Z stderr F I0526 18:12:22.247684 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.249210421Z stderr F I0526 18:12:22.249042 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.256566206Z stderr F I0526 18:12:22.256337 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.256599345Z stderr F I0526 18:12:22.256380 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.256609052Z stderr F I0526 18:12:22.256424 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.259019372Z stderr F I0526 18:12:22.258919 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.271989852Z stderr F I0526 18:12:22.271804 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.273247302Z stderr F I0526 18:12:22.273139 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.273270097Z stderr F I0526 18:12:22.273164 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.273332367Z stderr F I0526 18:12:22.273242 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.273411274Z stderr F I0526 18:12:22.273375 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.273754156Z stderr F I0526 18:12:22.273680 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.288681765Z stderr F I0526 18:12:22.288502 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.289225846Z stderr F I0526 18:12:22.289135 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.289357701Z stderr F I0526 18:12:22.289287 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.289503239Z stderr F I0526 18:12:22.289453 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.289661814Z stderr F I0526 18:12:22.289606 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.302246061Z stderr F I0526 18:12:22.302098 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.303333051Z stderr F I0526 18:12:22.303229 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.303527494Z stderr F I0526 18:12:22.303466 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.303824844Z stderr F I0526 18:12:22.303748 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.304490977Z stderr F I0526 18:12:22.304404 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.316795597Z stderr F I0526 18:12:22.316573 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.3176296Z stderr F I0526 18:12:22.317523 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.317763096Z stderr F I0526 18:12:22.317710 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.31791115Z stderr F I0526 18:12:22.317860 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.318115416Z stderr F I0526 18:12:22.318004 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.333774043Z stderr F I0526 18:12:22.333585 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.333810887Z stderr F I0526 18:12:22.333615 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.333825797Z stderr F I0526 18:12:22.333656 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.333896218Z stderr F I0526 18:12:22.333786 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.334192187Z stderr F I0526 18:12:22.334105 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.3471752Z stderr F I0526 18:12:22.346981 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.347461983Z stderr F I0526 18:12:22.347408 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.347613632Z stderr F I0526 18:12:22.347120 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.347728763Z stderr F I0526 18:12:22.347549 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.34778357Z stderr F I0526 18:12:22.347729 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.359019269Z stderr F I0526 18:12:22.358885 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.359085744Z stderr F I0526 18:12:22.358910 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.359093516Z stderr F I0526 18:12:22.358969 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.359266732Z stderr F I0526 18:12:22.359195 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.359630905Z stderr F I0526 18:12:22.359524 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.371974215Z stderr F I0526 18:12:22.371763 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.372492245Z stderr F I0526 18:12:22.372415 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.372571948Z stderr F I0526 18:12:22.372534 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.372719824Z stderr F I0526 18:12:22.372641 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.37279005Z stderr F I0526 18:12:22.372708 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.384759972Z stderr F I0526 18:12:22.384617 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.385673679Z stderr F I0526 18:12:22.385582 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.385761146Z stderr F I0526 18:12:22.385725 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.385884348Z stderr F I0526 18:12:22.385838 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.386002208Z stderr F I0526 18:12:22.385959 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.397469693Z stderr F I0526 18:12:22.397298 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.398590587Z stderr F I0526 18:12:22.398493 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.398706927Z stderr F I0526 18:12:22.398666 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.398881876Z stderr F I0526 18:12:22.398793 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.399021808Z stderr F I0526 18:12:22.398969 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.410627631Z stderr F I0526 18:12:22.410425 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.410679392Z stderr F I0526 18:12:22.410455 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.410686076Z stderr F I0526 18:12:22.410493 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.410775707Z stderr F I0526 18:12:22.410592 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.41090052Z stderr F I0526 18:12:22.410803 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.42335488Z stderr F I0526 18:12:22.423210 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.424209322Z stderr F I0526 18:12:22.424066 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.424228028Z stderr F I0526 18:12:22.424092 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.424234834Z stderr F I0526 18:12:22.424131 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.424635561Z stderr F I0526 18:12:22.424548 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.436595314Z stderr F I0526 18:12:22.436414 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.437358377Z stderr F I0526 18:12:22.437199 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.437380061Z stderr F I0526 18:12:22.437217 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.437386134Z stderr F I0526 18:12:22.437258 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.437606131Z stderr F I0526 18:12:22.437513 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.450232875Z stderr F I0526 18:12:22.449988 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.450297532Z stderr F I0526 18:12:22.450016 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.450304497Z stderr F I0526 18:12:22.450103 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.450316662Z stderr F I0526 18:12:22.450154 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.45053695Z stderr F I0526 18:12:22.450443 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.462792992Z stderr F I0526 18:12:22.462651 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.464150325Z stderr F I0526 18:12:22.463994 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.464263809Z stderr F I0526 18:12:22.464227 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.46438852Z stderr F I0526 18:12:22.464344 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.464541548Z stderr F I0526 18:12:22.464494 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.476928354Z stderr F I0526 18:12:22.476775 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.477821285Z stderr F I0526 18:12:22.477734 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.477907912Z stderr F I0526 18:12:22.477871 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.478015919Z stderr F I0526 18:12:22.477981 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.478211552Z stderr F I0526 18:12:22.478167 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.491409434Z stderr F I0526 18:12:22.491142 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.491883603Z stderr F I0526 18:12:22.491818 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.491959721Z stderr F I0526 18:12:22.491926 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.49211472Z stderr F I0526 18:12:22.492071 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.492250504Z stderr F I0526 18:12:22.492209 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.50701469Z stderr F I0526 18:12:22.506776 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.507091852Z stderr F I0526 18:12:22.506814 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.507100056Z stderr F I0526 18:12:22.506909 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.507194137Z stderr F I0526 18:12:22.507146 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.507520683Z stderr F I0526 18:12:22.507444 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.519654413Z stderr F I0526 18:12:22.519469 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.519690713Z stderr F I0526 18:12:22.519499 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.519698371Z stderr F I0526 18:12:22.519538 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.519703502Z stderr F I0526 18:12:22.519577 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.520101825Z stderr F I0526 18:12:22.519946 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.533556912Z stderr F I0526 18:12:22.533352 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.53360635Z stderr F I0526 18:12:22.533382 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.533613743Z stderr F I0526 18:12:22.533428 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.533779973Z stderr F I0526 18:12:22.533699 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.534202288Z stderr F I0526 18:12:22.534138 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.546570965Z stderr F I0526 18:12:22.546433 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.547863145Z stderr F I0526 18:12:22.547745 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.548011981Z stderr F I0526 18:12:22.547959 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.54819575Z stderr F I0526 18:12:22.548154 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.548314871Z stderr F I0526 18:12:22.548276 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.56754518Z stderr F I0526 18:12:22.567406 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.567599725Z stderr F I0526 18:12:22.567434 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.567654637Z stderr F I0526 18:12:22.567499 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.567690735Z stderr F I0526 18:12:22.567675 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.56800902Z stderr F I0526 18:12:22.567960 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.586649975Z stderr F I0526 18:12:22.586491 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.586703129Z stderr F I0526 18:12:22.586629 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.586775749Z stderr F I0526 18:12:22.586723 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.58769065Z stderr F I0526 18:12:22.587569 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.588155559Z stderr F I0526 18:12:22.588016 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.608290287Z stderr F I0526 18:12:22.608065 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.608329382Z stderr F I0526 18:12:22.608096 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.608337267Z stderr F I0526 18:12:22.608138 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.608466154Z stderr F I0526 18:12:22.608422 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.6087254Z stderr F I0526 18:12:22.608648 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.623213293Z stderr F I0526 18:12:22.623104 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.623438744Z stderr F I0526 18:12:22.623368 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.623460922Z stderr F I0526 18:12:22.623432 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.623570984Z stderr F I0526 18:12:22.623510 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.623686056Z stderr F I0526 18:12:22.623637 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.635733098Z stderr F I0526 18:12:22.635602 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.635769592Z stderr F I0526 18:12:22.635629 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.635832438Z stderr F I0526 18:12:22.635798 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.636271254Z stderr F I0526 18:12:22.636211 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.636391221Z stderr F I0526 18:12:22.636353 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.650266105Z stderr F I0526 18:12:22.650096 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.650882639Z stderr F I0526 18:12:22.650763 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.650913206Z stderr F I0526 18:12:22.650885 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.651002045Z stderr F I0526 18:12:22.650966 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.651145787Z stderr F I0526 18:12:22.651110 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.67500498Z stderr F I0526 18:12:22.674807 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.677539534Z stderr F I0526 18:12:22.677428 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.677651838Z stderr F I0526 18:12:22.677624 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.677755401Z stderr F I0526 18:12:22.677733 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.677864548Z stderr F I0526 18:12:22.677832 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.690935568Z stderr F I0526 18:12:22.690717 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.690973927Z stderr F I0526 18:12:22.690745 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.690980701Z stderr F I0526 18:12:22.690788 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.691188252Z stderr F I0526 18:12:22.691121 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.691405747Z stderr F I0526 18:12:22.691347 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.704961484Z stderr F I0526 18:12:22.704794 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.705827843Z stderr F I0526 18:12:22.705752 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.705848634Z stderr F I0526 18:12:22.705766 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.706325926Z stderr F I0526 18:12:22.706233 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.706498292Z stderr F I0526 18:12:22.706433 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.719462596Z stderr F I0526 18:12:22.719319 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.720254758Z stderr F I0526 18:12:22.720164 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.720376206Z stderr F I0526 18:12:22.720337 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.720499878Z stderr F I0526 18:12:22.720454 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.720625077Z stderr F I0526 18:12:22.720581 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.732857684Z stderr F I0526 18:12:22.732692 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.732896186Z stderr F I0526 18:12:22.732721 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.732902829Z stderr F I0526 18:12:22.732763 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.733138991Z stderr F I0526 18:12:22.733054 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.733415401Z stderr F I0526 18:12:22.733326 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.745420142Z stderr F I0526 18:12:22.745255 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.745456177Z stderr F I0526 18:12:22.745292 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.745615196Z stderr F I0526 18:12:22.745548 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.74613955Z stderr F I0526 18:12:22.745997 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.746358938Z stderr F I0526 18:12:22.746271 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.757915799Z stderr F I0526 18:12:22.757746 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.75892178Z stderr F I0526 18:12:22.758781 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.758965728Z stderr F I0526 18:12:22.758933 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.759140544Z stderr F I0526 18:12:22.759096 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.760849365Z stderr F I0526 18:12:22.760771 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.77209568Z stderr F I0526 18:12:22.771882 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.772130286Z stderr F I0526 18:12:22.771909 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.772137673Z stderr F I0526 18:12:22.771949 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.772196541Z stderr F I0526 18:12:22.772095 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.772399179Z stderr F I0526 18:12:22.772305 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.783711371Z stderr F I0526 18:12:22.783547 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.784597033Z stderr F I0526 18:12:22.784503 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.784618779Z stderr F I0526 18:12:22.784523 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.784663998Z stderr F I0526 18:12:22.784595 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.784695007Z stderr F I0526 18:12:22.784675 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.796515591Z stderr F I0526 18:12:22.796381 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.797447065Z stderr F I0526 18:12:22.797362 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.797565682Z stderr F I0526 18:12:22.797524 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.797694291Z stderr F I0526 18:12:22.797655 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.79779912Z stderr F I0526 18:12:22.797765 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.819565523Z stderr F I0526 18:12:22.819387 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.820102831Z stderr F I0526 18:12:22.819968 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.820187177Z stderr F I0526 18:12:22.820150 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.820304375Z stderr F I0526 18:12:22.820255 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.820432553Z stderr F I0526 18:12:22.820388 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.831646126Z stderr F I0526 18:12:22.831489 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.832091954Z stderr F I0526 18:12:22.831968 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.832184444Z stderr F I0526 18:12:22.832137 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.832310348Z stderr F I0526 18:12:22.832263 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.832471728Z stderr F I0526 18:12:22.832415 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.844304279Z stderr F I0526 18:12:22.844117 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.844730008Z stderr F I0526 18:12:22.844659 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.844755299Z stderr F I0526 18:12:22.844725 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.844828403Z stderr F I0526 18:12:22.844789 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.844911556Z stderr F I0526 18:12:22.844878 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.85873579Z stderr F I0526 18:12:22.858555 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.858771828Z stderr F I0526 18:12:22.858583 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.858788124Z stderr F I0526 18:12:22.858623 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.858906232Z stderr F I0526 18:12:22.858723 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.859097852Z stderr F I0526 18:12:22.858961 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.871911322Z stderr F I0526 18:12:22.871747 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.872555547Z stderr F I0526 18:12:22.872451 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.87257431Z stderr F I0526 18:12:22.872471 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.8726318Z stderr F I0526 18:12:22.872507 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.872639839Z stderr F I0526 18:12:22.872581 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.885845403Z stderr F I0526 18:12:22.885648 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.885883285Z stderr F I0526 18:12:22.885679 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.885906083Z stderr F I0526 18:12:22.885722 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.886174021Z stderr F I0526 18:12:22.886098 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.886410761Z stderr F I0526 18:12:22.886335 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.900277609Z stderr F I0526 18:12:22.900114 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.901869373Z stderr F I0526 18:12:22.901745 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.901980972Z stderr F I0526 18:12:22.901952 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.902190363Z stderr F I0526 18:12:22.902125 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.902345388Z stderr F I0526 18:12:22.902292 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.918086525Z stderr F I0526 18:12:22.917892 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.918931928Z stderr F I0526 18:12:22.918800 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.918951526Z stderr F I0526 18:12:22.918819 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.919121981Z stderr F I0526 18:12:22.919046 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.919199728Z stderr F I0526 18:12:22.919168 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.931813957Z stderr F I0526 18:12:22.931646 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.932595164Z stderr F I0526 18:12:22.932483 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.932613138Z stderr F I0526 18:12:22.932503 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.932619647Z stderr F I0526 18:12:22.932539 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.932736966Z stderr F I0526 18:12:22.932665 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.946044314Z stderr F I0526 18:12:22.945866 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.947095562Z stderr F I0526 18:12:22.946966 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.947120035Z stderr F I0526 18:12:22.946986 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.947175212Z stderr F I0526 18:12:22.947092 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.947205786Z stderr F I0526 18:12:22.947167 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.962233252Z stderr F I0526 18:12:22.962078 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.962669974Z stderr F I0526 18:12:22.962597 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.962687355Z stderr F I0526 18:12:22.962614 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.96273097Z stderr F I0526 18:12:22.962668 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.962777239Z stderr F I0526 18:12:22.962734 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.976460012Z stderr F I0526 18:12:22.976283 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.976496919Z stderr F I0526 18:12:22.976312 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.976503668Z stderr F I0526 18:12:22.976351 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.97662848Z stderr F I0526 18:12:22.976554 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.976770215Z stderr F I0526 18:12:22.976707 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.987935165Z stderr F I0526 18:12:22.987774 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:22.987973406Z stderr F I0526 18:12:22.987801 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:22.987982995Z stderr F I0526 18:12:22.987841 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:22.988093033Z stderr F I0526 18:12:22.988039 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:22.988514485Z stderr F I0526 18:12:22.988413 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.000145452Z stderr F I0526 18:12:22.999980 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.000893271Z stderr F I0526 18:12:23.000809 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:23.000911968Z stderr F I0526 18:12:23.000825 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:23.000961009Z stderr F I0526 18:12:23.000862 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:23.001244582Z stderr F I0526 18:12:23.001181 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.015200641Z stderr F I0526 18:12:23.015025 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.163780083Z stderr F W0526 18:12:23.163602 1 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
2019-05-26T18:12:23.173048772Z stderr F W0526 18:12:23.172737 1 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
2019-05-26T18:12:23.176541133Z stderr F W0526 18:12:23.176389 1 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
2019-05-26T18:12:23.177378028Z stderr F W0526 18:12:23.177268 1 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
2019-05-26T18:12:23.179312763Z stderr F W0526 18:12:23.179156 1 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
2019-05-26T18:12:23.981714907Z stderr F E0526 18:12:23.981324 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.982149088Z stderr F E0526 18:12:23.981378 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.982255536Z stderr F E0526 18:12:23.981481 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.98230907Z stderr F E0526 18:12:23.981564 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.982315603Z stderr F E0526 18:12:23.981604 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.982340688Z stderr F E0526 18:12:23.981622 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-05-26T18:12:23.982346931Z stderr F I0526 18:12:23.981650 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
2019-05-26T18:12:23.98235295Z stderr F I0526 18:12:23.981658 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
2019-05-26T18:12:23.983966843Z stderr F I0526 18:12:23.983805 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:23.983989324Z stderr F I0526 18:12:23.983831 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:23.983995472Z stderr F I0526 18:12:23.983877 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:23.984063366Z stderr F I0526 18:12:23.984006 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.995302392Z stderr F I0526 18:12:23.995113 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:23.996407069Z stderr F I0526 18:12:23.996301 1 client.go:352] parsed scheme: ""
2019-05-26T18:12:23.996443069Z stderr F I0526 18:12:23.996420 1 client.go:352] scheme "" not registered, fallback to default scheme
2019-05-26T18:12:23.996540805Z stderr F I0526 18:12:23.996494 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-05-26T18:12:23.996626271Z stderr F I0526 18:12:23.996583 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:24.008356387Z stderr F I0526 18:12:24.008217 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-05-26T18:12:25.620693646Z stderr F I0526 18:12:25.620517 1 secure_serving.go:116] Serving securely on [::]:6443
2019-05-26T18:12:25.624364463Z stderr F I0526 18:12:25.621927 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
2019-05-26T18:12:25.6243932Z stderr F I0526 18:12:25.621967 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
2019-05-26T18:12:25.624399455Z stderr F I0526 18:12:25.621982 1 available_controller.go:320] Starting AvailableConditionController
2019-05-26T18:12:25.624403414Z stderr F I0526 18:12:25.621992 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2019-05-26T18:12:25.624407542Z stderr F I0526 18:12:25.622004 1 controller.go:81] Starting OpenAPI AggregationController
2019-05-26T18:12:25.624414369Z stderr F I0526 18:12:25.622042 1 autoregister_controller.go:139] Starting autoregister controller
2019-05-26T18:12:25.624418332Z stderr F I0526 18:12:25.622054 1 cache.go:32] Waiting for caches to sync for autoregister controller
2019-05-26T18:12:25.624422962Z stderr F I0526 18:12:25.622073 1 crd_finalizer.go:242] Starting CRDFinalizer
2019-05-26T18:12:25.624427787Z stderr F I0526 18:12:25.622872 1 crdregistration_controller.go:112] Starting crd-autoregister controller
2019-05-26T18:12:25.624432065Z stderr F I0526 18:12:25.622883 1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
2019-05-26T18:12:25.624437818Z stderr F I0526 18:12:25.622898 1 customresource_discovery_controller.go:208] Starting DiscoveryController
2019-05-26T18:12:25.624455607Z stderr F I0526 18:12:25.622919 1 naming_controller.go:284] Starting NamingConditionController
2019-05-26T18:12:25.624459756Z stderr F I0526 18:12:25.622929 1 establishing_controller.go:73] Starting EstablishingController
2019-05-26T18:12:25.668996976Z stderr F E0526 18:12:25.668846 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
2019-05-26T18:12:25.922720111Z stderr F I0526 18:12:25.922554 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2019-05-26T18:12:25.922915146Z stderr F I0526 18:12:25.922821 1 cache.go:39] Caches are synced for AvailableConditionController controller
2019-05-26T18:12:25.923052775Z stderr F I0526 18:12:25.922985 1 cache.go:39] Caches are synced for autoregister controller
2019-05-26T18:12:25.926413323Z stderr F I0526 18:12:25.926306 1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
2019-05-26T18:12:25.980311298Z stderr F I0526 18:12:25.980173 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
2019-05-26T18:12:26.618977688Z stderr F I0526 18:12:26.618775 1 controller.go:107] OpenAPI AggregationController: Processing item
2019-05-26T18:12:26.619052942Z stderr F I0526 18:12:26.618816 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2019-05-26T18:12:26.619060805Z stderr F I0526 18:12:26.618856 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2019-05-26T18:12:26.641141124Z stderr F I0526 18:12:26.641001 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
2019-05-26T18:12:26.64407857Z stderr F I0526 18:12:26.643961 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
2019-05-26T18:12:26.646918906Z stderr F I0526 18:12:26.646816 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
2019-05-26T18:12:26.650075069Z stderr F I0526 18:12:26.649979 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
2019-05-26T18:12:26.653408588Z stderr F I0526 18:12:26.653274 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
2019-05-26T18:12:26.657643305Z stderr F I0526 18:12:26.657543 1 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
2019-05-26T18:12:26.658238292Z stderr F I0526 18:12:26.658149 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
2019-05-26T18:12:26.66264566Z stderr F I0526 18:12:26.662552 1 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
2019-05-26T18:12:26.662666793Z stderr F I0526 18:12:26.662578 1 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
2019-05-26T18:12:26.663305317Z stderr F I0526 18:12:26.663216 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
2019-05-26T18:12:26.669452985Z stderr F I0526 18:12:26.669328 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
2019-05-26T18:12:26.673054604Z stderr F I0526 18:12:26.672907 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
2019-05-26T18:12:26.676189712Z stderr F I0526 18:12:26.676089 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
2019-05-26T18:12:26.679226575Z stderr F I0526 18:12:26.679127 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
2019-05-26T18:12:26.682788084Z stderr F I0526 18:12:26.682682 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
2019-05-26T18:12:26.686787262Z stderr F I0526 18:12:26.686679 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
2019-05-26T18:12:26.690126261Z stderr F I0526 18:12:26.690012 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
2019-05-26T18:12:26.693438423Z stderr F I0526 18:12:26.693324 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
2019-05-26T18:12:26.69760573Z stderr F I0526 18:12:26.697464 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
2019-05-26T18:12:26.700747435Z stderr F I0526 18:12:26.700642 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
2019-05-26T18:12:26.703678942Z stderr F I0526 18:12:26.703579 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
2019-05-26T18:12:26.706794757Z stderr F I0526 18:12:26.706712 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
2019-05-26T18:12:26.709785552Z stderr F I0526 18:12:26.709663 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
2019-05-26T18:12:26.712876847Z stderr F I0526 18:12:26.712802 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
2019-05-26T18:12:26.716116589Z stderr F I0526 18:12:26.716023 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
2019-05-26T18:12:26.719104767Z stderr F I0526 18:12:26.719011 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
2019-05-26T18:12:26.722704027Z stderr F I0526 18:12:26.722587 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
2019-05-26T18:12:26.72594366Z stderr F I0526 18:12:26.725840 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
2019-05-26T18:12:26.729050841Z stderr F I0526 18:12:26.728883 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
2019-05-26T18:12:26.732048629Z stderr F I0526 18:12:26.731957 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
2019-05-26T18:12:26.735337322Z stderr F I0526 18:12:26.735244 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
2019-05-26T18:12:26.738198215Z stderr F I0526 18:12:26.738107 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
2019-05-26T18:12:26.741755244Z stderr F I0526 18:12:26.741631 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
2019-05-26T18:12:26.744543211Z stderr F I0526 18:12:26.744453 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
2019-05-26T18:12:26.747690409Z stderr F I0526 18:12:26.747618 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
2019-05-26T18:12:26.751104243Z stderr F I0526 18:12:26.751016 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
2019-05-26T18:12:26.754204666Z stderr F I0526 18:12:26.754110 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
2019-05-26T18:12:26.75696883Z stderr F I0526 18:12:26.756869 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
2019-05-26T18:12:26.760073755Z stderr F I0526 18:12:26.759977 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
2019-05-26T18:12:26.763397184Z stderr F I0526 18:12:26.763318 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
2019-05-26T18:12:26.766530784Z stderr F I0526 18:12:26.766422 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
2019-05-26T18:12:26.772496987Z stderr F I0526 18:12:26.772376 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
2019-05-26T18:12:26.775696702Z stderr F I0526 18:12:26.775593 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
2019-05-26T18:12:26.778777433Z stderr F I0526 18:12:26.778671 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
2019-05-26T18:12:26.785560306Z stderr F I0526 18:12:26.785461 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
2019-05-26T18:12:26.79464787Z stderr F I0526 18:12:26.794426 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
2019-05-26T18:12:26.797701658Z stderr F I0526 18:12:26.797587 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
2019-05-26T18:12:26.800760622Z stderr F I0526 18:12:26.800650 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
2019-05-26T18:12:26.803768805Z stderr F I0526 18:12:26.803690 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
2019-05-26T18:12:26.806632894Z stderr F I0526 18:12:26.806536 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
2019-05-26T18:12:26.80984136Z stderr F I0526 18:12:26.809767 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
2019-05-26T18:12:26.813377104Z stderr F I0526 18:12:26.813307 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
2019-05-26T18:12:26.816842281Z stderr F I0526 18:12:26.816774 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
2019-05-26T18:12:26.846955334Z stderr F I0526 18:12:26.846808 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
2019-05-26T18:12:26.883829512Z stderr F I0526 18:12:26.883727 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
2019-05-26T18:12:26.924623243Z stderr F I0526 18:12:26.924468 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
2019-05-26T18:12:26.964275722Z stderr F I0526 18:12:26.964123 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
2019-05-26T18:12:27.004443834Z stderr F I0526 18:12:27.004269 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
2019-05-26T18:12:27.043892181Z stderr F I0526 18:12:27.043748 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
2019-05-26T18:12:27.086121014Z stderr F I0526 18:12:27.085972 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
2019-05-26T18:12:27.124802489Z stderr F I0526 18:12:27.124640 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
2019-05-26T18:12:27.165726323Z stderr F I0526 18:12:27.165567 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
2019-05-26T18:12:27.20417664Z stderr F I0526 18:12:27.204007 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
2019-05-26T18:12:27.244315015Z stderr F I0526 18:12:27.244162 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
2019-05-26T18:12:27.283645724Z stderr F I0526 18:12:27.283471 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
2019-05-26T18:12:27.324261443Z stderr F I0526 18:12:27.324059 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
2019-05-26T18:12:27.364477939Z stderr F I0526 18:12:27.364291 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
2019-05-26T18:12:27.404169816Z stderr F I0526 18:12:27.403998 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
2019-05-26T18:12:27.444294027Z stderr F I0526 18:12:27.444123 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
2019-05-26T18:12:27.484338924Z stderr F I0526 18:12:27.484169 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
2019-05-26T18:12:27.524007864Z stderr F I0526 18:12:27.523833 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
2019-05-26T18:12:27.564218974Z stderr F I0526 18:12:27.564018 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
2019-05-26T18:12:27.604189833Z stderr F I0526 18:12:27.604024 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
2019-05-26T18:12:27.644021093Z stderr F I0526 18:12:27.643831 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
2019-05-26T18:12:27.690998579Z stderr F I0526 18:12:27.683874 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
2019-05-26T18:12:27.731861784Z stderr F I0526 18:12:27.731682 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
2019-05-26T18:12:27.765339385Z stderr F I0526 18:12:27.765147 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
2019-05-26T18:12:27.804188928Z stderr F I0526 18:12:27.804015 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
2019-05-26T18:12:27.844109136Z stderr F I0526 18:12:27.843849 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
2019-05-26T18:12:27.884230404Z stderr F I0526 18:12:27.884042 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
2019-05-26T18:12:27.924271629Z stderr F I0526 18:12:27.924111 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
2019-05-26T18:12:27.964415197Z stderr F I0526 18:12:27.964218 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
2019-05-26T18:12:28.004754952Z stderr F I0526 18:12:28.004528 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
2019-05-26T18:12:28.044199268Z stderr F I0526 18:12:28.044054 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
2019-05-26T18:12:28.083759819Z stderr F I0526 18:12:28.083611 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
2019-05-26T18:12:28.124430757Z stderr F I0526 18:12:28.124292 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
2019-05-26T18:12:28.16437405Z stderr F I0526 18:12:28.164214 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
2019-05-26T18:12:28.204198978Z stderr F I0526 18:12:28.204038 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
2019-05-26T18:12:28.244297747Z stderr F I0526 18:12:28.244060 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
2019-05-26T18:12:28.284148534Z stderr F I0526 18:12:28.283964 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
2019-05-26T18:12:28.324007526Z stderr F I0526 18:12:28.323856 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
2019-05-26T18:12:28.364056039Z stderr F I0526 18:12:28.363882 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
2019-05-26T18:12:28.404598267Z stderr F I0526 18:12:28.404407 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
2019-05-26T18:12:28.444254945Z stderr F I0526 18:12:28.444053 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
2019-05-26T18:12:28.482046561Z stderr F I0526 18:12:28.481890 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
2019-05-26T18:12:28.484683333Z stderr F I0526 18:12:28.484552 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
2019-05-26T18:12:28.526230016Z stderr F I0526 18:12:28.526065 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
2019-05-26T18:12:28.564270514Z stderr F I0526 18:12:28.564063 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
2019-05-26T18:12:28.60445003Z stderr F I0526 18:12:28.604284 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
2019-05-26T18:12:28.644433579Z stderr F I0526 18:12:28.644268 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
2019-05-26T18:12:28.683956287Z stderr F I0526 18:12:28.683812 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
2019-05-26T18:12:28.725807417Z stderr F I0526 18:12:28.725654 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
2019-05-26T18:12:28.762110097Z stderr F I0526 18:12:28.761948 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
2019-05-26T18:12:28.764775296Z stderr F I0526 18:12:28.764622 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
2019-05-26T18:12:28.804464902Z stderr F I0526 18:12:28.804315 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
2019-05-26T18:12:28.816241338Z stderr F I0526 18:12:28.816106 1 controller.go:606] quota admission added evaluator for: endpoints
2019-05-26T18:12:28.846468199Z stderr F I0526 18:12:28.846334 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
2019-05-26T18:12:28.884010544Z stderr F I0526 18:12:28.883818 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
2019-05-26T18:12:28.924510937Z stderr F I0526 18:12:28.924306 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
2019-05-26T18:12:28.964292669Z stderr F I0526 18:12:28.964137 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
2019-05-26T18:12:29.004346817Z stderr F I0526 18:12:29.004125 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
2019-05-26T18:12:29.040879555Z stderr F W0526 18:12:29.040705 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
2019-05-26T18:12:30.037283972Z stderr F I0526 18:12:30.037149 1 controller.go:606] quota admission added evaluator for: serviceaccounts
2019-05-26T18:12:30.331441835Z stderr F I0526 18:12:30.331258 1 controller.go:606] quota admission added evaluator for: deployments.apps
2019-05-26T18:12:30.666794536Z stderr F I0526 18:12:30.666593 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
2019-05-26T18:12:31.338169339Z stderr F I0526 18:12:31.337984 1 controller.go:606] quota admission added evaluator for: daemonsets.extensions
2019-05-26T18:12:45.099110599Z stderr F I0526 18:12:45.098979 1 controller.go:606] quota admission added evaluator for: replicasets.apps
2019-05-26T18:12:45.381012894Z stderr F I0526 18:12:45.380860 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
2019-05-26T18:13:36.553942222Z stderr F I0526 18:13:36.549442 1 trace.go:81] Trace[2038030302]: "Get /api/v1/namespaces/kube-system/pods/kindnet-xcdxm" (started: 2019-05-26 18:13:35.62133341 +0000 UTC m=+74.791742356) (total time: 928.076993ms):
2019-05-26T18:13:36.55397148Z stderr F Trace[2038030302]: [927.726313ms] [927.709813ms] About to write a response
2019-05-26T18:13:36.55397689Z stderr F I0526 18:13:36.551389 1 trace.go:81] Trace[511803482]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-05-26 18:13:35.799852977 +0000 UTC m=+74.970261935) (total time: 751.512327ms):
2019-05-26T18:13:36.553980081Z stderr F Trace[511803482]: [751.433172ms] [751.35883ms] About to write a response
2019-05-26T18:13:37.576548885Z stderr F I0526 18:13:37.576432 1 trace.go:81] Trace[959190432]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-26 18:13:36.552584981 +0000 UTC m=+75.722993949) (total time: 1.023811534s):
2019-05-26T18:13:37.576566523Z stderr F Trace[959190432]: [1.023775119s] [1.023625811s] Transaction committed
2019-05-26T18:13:37.576945371Z stderr F I0526 18:13:37.576871 1 trace.go:81] Trace[306142073]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-05-26 18:13:36.552505443 +0000 UTC m=+75.722914406) (total time: 1.024348593s):
2019-05-26T18:13:37.576957222Z stderr F Trace[306142073]: [1.023957283s] [1.023903937s] Object stored in database
2019-05-26T18:13:37.580243328Z stderr F I0526 18:13:37.580135 1 trace.go:81] Trace[995041907]: "GuaranteedUpdate etcd3: *core.Pod" (started: 2019-05-26 18:13:36.552257392 +0000 UTC m=+75.722666363) (total time: 1.027846746s):
2019-05-26T18:13:37.58025922Z stderr F Trace[995041907]: [1.027759032s] [1.02674348s] Transaction committed
2019-05-26T18:13:37.580722784Z stderr F I0526 18:13:37.580657 1 trace.go:81] Trace[1212307604]: "Patch /api/v1/namespaces/kube-system/pods/kindnet-xcdxm/status" (started: 2019-05-26 18:13:36.551486273 +0000 UTC m=+75.721895215) (total time: 1.029154884s):
2019-05-26T18:13:37.58073491Z stderr F Trace[1212307604]: [1.02873576s] [1.027146547s] Object stored in database
2019-05-26T18:14:01.876974667Z stderr F E0526 18:14:01.871921 1 upgradeaware.go:384] Error proxying data from backend to client: tls: use of closed connection
2019-05-26T18:12:19.690390247Z stderr F I0526 18:12:19.686684 1 serving.go:319] Generated self-signed cert in-memory
2019-05-26T18:12:20.611476566Z stderr F I0526 18:12:20.594241 1 controllermanager.go:155] Version: v1.14.2
2019-05-26T18:12:20.611501146Z stderr F I0526 18:12:20.595246 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
2019-05-26T18:12:20.611506592Z stderr F I0526 18:12:20.596139 1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
2019-05-26T18:12:20.61151495Z stderr F I0526 18:12:20.596354 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-controller-manager...
2019-05-26T18:12:20.611520605Z stderr F E0526 18:12:20.598526 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://172.17.0.3:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:25.85502377Z stderr F E0526 18:12:25.854862 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
2019-05-26T18:12:28.818647152Z stderr F I0526 18:12:28.818495 1 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
2019-05-26T18:12:28.819150389Z stderr F I0526 18:12:28.819044 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"d28a5129-7fe1-11e9-83f7-0242ac110003", APIVersion:"v1", ResourceVersion:"144", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_cda416f7-7fe1-11e9-9009-0242ac110003 became leader
2019-05-26T18:12:30.029006659Z stderr F I0526 18:12:30.028705 1 plugins.go:103] No cloud provider specified.
2019-05-26T18:12:30.033980375Z stderr F I0526 18:12:30.031106 1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
2019-05-26T18:12:30.131464446Z stderr F I0526 18:12:30.131294 1 controller_utils.go:1034] Caches are synced for tokens controller
2019-05-26T18:12:30.149570892Z stderr F I0526 18:12:30.149427 1 controllermanager.go:497] Started "replicationcontroller"
2019-05-26T18:12:30.150417108Z stderr F W0526 18:12:30.150327 1 core.go:175] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
2019-05-26T18:12:30.15043546Z stderr F W0526 18:12:30.150348 1 controllermanager.go:489] Skipping "route"
2019-05-26T18:12:30.150654782Z stderr F I0526 18:12:30.150601 1 replica_set.go:182] Starting replicationcontroller controller
2019-05-26T18:12:30.150755425Z stderr F I0526 18:12:30.150721 1 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
2019-05-26T18:12:30.174687349Z stderr F I0526 18:12:30.174537 1 controllermanager.go:497] Started "daemonset"
2019-05-26T18:12:30.175858478Z stderr F I0526 18:12:30.175778 1 daemon_controller.go:267] Starting daemon sets controller
2019-05-26T18:12:30.175906271Z stderr F I0526 18:12:30.175829 1 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
2019-05-26T18:12:30.195038096Z stderr F I0526 18:12:30.194913 1 controllermanager.go:497] Started "deployment"
2019-05-26T18:12:30.195139123Z stderr F I0526 18:12:30.195094 1 deployment_controller.go:152] Starting deployment controller
2019-05-26T18:12:30.19518927Z stderr F I0526 18:12:30.195153 1 controller_utils.go:1027] Waiting for caches to sync for deployment controller
2019-05-26T18:12:30.231906972Z stderr F I0526 18:12:30.231777 1 controllermanager.go:497] Started "horizontalpodautoscaling"
2019-05-26T18:12:30.232015255Z stderr F I0526 18:12:30.231872 1 horizontal.go:156] Starting HPA controller
2019-05-26T18:12:30.232049895Z stderr F I0526 18:12:30.232002 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
2019-05-26T18:12:30.255152379Z stderr F W0526 18:12:30.252274 1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
2019-05-26T18:12:30.255208468Z stderr F I0526 18:12:30.254013 1 controllermanager.go:497] Started "attachdetach"
2019-05-26T18:12:30.255216245Z stderr F W0526 18:12:30.254036 1 controllermanager.go:489] Skipping "ttl-after-finished"
2019-05-26T18:12:30.255221792Z stderr F I0526 18:12:30.254110 1 attach_detach_controller.go:323] Starting attach detach controller
2019-05-26T18:12:30.255236223Z stderr F I0526 18:12:30.254119 1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
2019-05-26T18:12:30.50346038Z stderr F I0526 18:12:30.503288 1 controllermanager.go:497] Started "namespace"
2019-05-26T18:12:30.503509187Z stderr F I0526 18:12:30.503354 1 namespace_controller.go:186] Starting namespace controller
2019-05-26T18:12:30.50355857Z stderr F I0526 18:12:30.503412 1 controller_utils.go:1027] Waiting for caches to sync for namespace controller
2019-05-26T18:12:31.290475212Z stderr F I0526 18:12:31.290317 1 controllermanager.go:497] Started "garbagecollector"
2019-05-26T18:12:31.29104593Z stderr F I0526 18:12:31.290965 1 garbagecollector.go:130] Starting garbage collector controller
2019-05-26T18:12:31.292721156Z stderr F I0526 18:12:31.292633 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
2019-05-26T18:12:31.292782313Z stderr F I0526 18:12:31.292747 1 graph_builder.go:308] GraphBuilder running
2019-05-26T18:12:31.322854499Z stderr F I0526 18:12:31.322689 1 controllermanager.go:497] Started "replicaset"
2019-05-26T18:12:31.323027739Z stderr F I0526 18:12:31.322955 1 replica_set.go:182] Starting replicaset controller
2019-05-26T18:12:31.323238267Z stderr F I0526 18:12:31.323064 1 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
2019-05-26T18:12:31.406313594Z stderr F I0526 18:12:31.406152 1 node_lifecycle_controller.go:77] Sending events to api server
2019-05-26T18:12:31.406509064Z stderr F E0526 18:12:31.406435 1 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
2019-05-26T18:12:31.406591018Z stderr F W0526 18:12:31.406552 1 controllermanager.go:489] Skipping "cloud-node-lifecycle"
2019-05-26T18:12:31.634713817Z stderr F I0526 18:12:31.634568 1 controllermanager.go:497] Started "pv-protection"
2019-05-26T18:12:31.634773938Z stderr F I0526 18:12:31.634634 1 pv_protection_controller.go:81] Starting PV protection controller
2019-05-26T18:12:31.634786899Z stderr F I0526 18:12:31.634659 1 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
2019-05-26T18:12:31.884958046Z stderr F I0526 18:12:31.884823 1 controllermanager.go:497] Started "serviceaccount"
2019-05-26T18:12:31.885029462Z stderr F I0526 18:12:31.884891 1 serviceaccounts_controller.go:115] Starting service account controller
2019-05-26T18:12:31.885056524Z stderr F I0526 18:12:31.884915 1 controller_utils.go:1027] Waiting for caches to sync for service account controller
2019-05-26T18:12:32.133608038Z stderr F I0526 18:12:32.133455 1 controllermanager.go:497] Started "disruption"
2019-05-26T18:12:32.133786369Z stderr F I0526 18:12:32.133739 1 disruption.go:286] Starting disruption controller
2019-05-26T18:12:32.133866561Z stderr F I0526 18:12:32.133828 1 controller_utils.go:1027] Waiting for caches to sync for disruption controller
2019-05-26T18:12:32.383456091Z stderr F I0526 18:12:32.383252 1 controllermanager.go:497] Started "tokencleaner"
2019-05-26T18:12:32.383505668Z stderr F I0526 18:12:32.383339 1 tokencleaner.go:116] Starting token cleaner controller
2019-05-26T18:12:32.383540313Z stderr F I0526 18:12:32.383364 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller
2019-05-26T18:12:32.484140835Z stderr F I0526 18:12:32.484000 1 controller_utils.go:1034] Caches are synced for token_cleaner controller
2019-05-26T18:12:32.533111405Z stderr F I0526 18:12:32.532953 1 node_ipam_controller.go:99] Sending events to api server.
2019-05-26T18:12:42.536369446Z stderr F I0526 18:12:42.536204 1 range_allocator.go:78] Sending events to api server.
2019-05-26T18:12:42.537506743Z stderr F I0526 18:12:42.536328 1 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
2019-05-26T18:12:42.537523848Z stderr F I0526 18:12:42.536341 1 range_allocator.go:105] Node kind-control-plane has no CIDR, ignoring
2019-05-26T18:12:42.537530389Z stderr F I0526 18:12:42.536366 1 controllermanager.go:497] Started "nodeipam"
2019-05-26T18:12:42.537535091Z stderr F I0526 18:12:42.536521 1 node_ipam_controller.go:167] Starting ipam controller
2019-05-26T18:12:42.537539193Z stderr F I0526 18:12:42.536546 1 controller_utils.go:1027] Waiting for caches to sync for node controller
2019-05-26T18:12:42.555231261Z stderr F I0526 18:12:42.554955 1 controllermanager.go:497] Started "persistentvolume-expander"
2019-05-26T18:12:42.555280341Z stderr F I0526 18:12:42.555143 1 expand_controller.go:153] Starting expand controller
2019-05-26T18:12:42.555304445Z stderr F I0526 18:12:42.555191 1 controller_utils.go:1027] Waiting for caches to sync for expand controller
2019-05-26T18:12:42.574134061Z stderr F I0526 18:12:42.573980 1 controllermanager.go:497] Started "endpoint"
2019-05-26T18:12:42.574239185Z stderr F I0526 18:12:42.574180 1 endpoints_controller.go:166] Starting endpoint controller
2019-05-26T18:12:42.574258195Z stderr F I0526 18:12:42.574209 1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
2019-05-26T18:12:42.593076259Z stderr F I0526 18:12:42.592898 1 controllermanager.go:497] Started "podgc"
2019-05-26T18:12:42.593229743Z stderr F I0526 18:12:42.592915 1 gc_controller.go:76] Starting GC controller
2019-05-26T18:12:42.593263161Z stderr F I0526 18:12:42.593193 1 controller_utils.go:1027] Waiting for caches to sync for GC controller
2019-05-26T18:12:42.814763642Z stderr F I0526 18:12:42.814570 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
2019-05-26T18:12:42.8232418Z stderr F I0526 18:12:42.822990 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
2019-05-26T18:12:42.823350921Z stderr F I0526 18:12:42.823100 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
2019-05-26T18:12:42.823406514Z stderr F I0526 18:12:42.823171 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
2019-05-26T18:12:42.823417459Z stderr F I0526 18:12:42.823278 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
2019-05-26T18:12:42.823422064Z stderr F I0526 18:12:42.823321 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
2019-05-26T18:12:42.823506199Z stderr F I0526 18:12:42.823417 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
2019-05-26T18:12:42.823522162Z stderr F W0526 18:12:42.823453 1 shared_informer.go:311] resyncPeriod 49462560938970 is smaller than resyncCheckPeriod 66623680222726 and the informer has already started. Changing it to 66623680222726
2019-05-26T18:12:42.82419299Z stderr F I0526 18:12:42.824068 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
2019-05-26T18:12:42.824220713Z stderr F I0526 18:12:42.824166 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
2019-05-26T18:12:42.824272744Z stderr F I0526 18:12:42.824220 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
2019-05-26T18:12:42.824320435Z stderr F I0526 18:12:42.824282 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
2019-05-26T18:12:42.824392917Z stderr F I0526 18:12:42.824325 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
2019-05-26T18:12:42.824444645Z stderr F I0526 18:12:42.824397 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
2019-05-26T18:12:42.824549506Z stderr F I0526 18:12:42.824462 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
2019-05-26T18:12:42.824558369Z stderr F I0526 18:12:42.824517 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
2019-05-26T18:12:42.824651788Z stderr F I0526 18:12:42.824583 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
2019-05-26T18:12:42.824693552Z stderr F I0526 18:12:42.824653 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
2019-05-26T18:12:42.824827562Z stderr F I0526 18:12:42.824738 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
2019-05-26T18:12:42.824836813Z stderr F I0526 18:12:42.824768 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
2019-05-26T18:12:42.824879401Z stderr F I0526 18:12:42.824810 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
2019-05-26T18:12:42.82489597Z stderr F I0526 18:12:42.824863 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
2019-05-26T18:12:42.825029986Z stderr F W0526 18:12:42.824949 1 shared_informer.go:311] resyncPeriod 62543961500557 is smaller than resyncCheckPeriod 66623680222726 and the informer has already started. Changing it to 66623680222726
2019-05-26T18:12:42.825137408Z stderr F I0526 18:12:42.825079 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
2019-05-26T18:12:42.825169815Z stderr F I0526 18:12:42.825139 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
2019-05-26T18:12:42.825265543Z stderr F E0526 18:12:42.825210 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
2019-05-26T18:12:42.825276929Z stderr F I0526 18:12:42.825227 1 controllermanager.go:497] Started "resourcequota"
2019-05-26T18:12:42.82566475Z stderr F I0526 18:12:42.825593 1 resource_quota_controller.go:276] Starting resource quota controller
2019-05-26T18:12:42.825776077Z stderr F I0526 18:12:42.825733 1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
2019-05-26T18:12:42.825854057Z stderr F I0526 18:12:42.825814 1 resource_quota_monitor.go:301] QuotaMonitor running
2019-05-26T18:12:42.846258663Z stderr F I0526 18:12:42.846088 1 controllermanager.go:497] Started "statefulset"
2019-05-26T18:12:42.846345837Z stderr F I0526 18:12:42.846274 1 stateful_set.go:151] Starting stateful set controller
2019-05-26T18:12:42.847074802Z stderr F I0526 18:12:42.846965 1 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
2019-05-26T18:12:42.871814667Z stderr F I0526 18:12:42.871600 1 controllermanager.go:497] Started "ttl"
2019-05-26T18:12:42.871861742Z stderr F I0526 18:12:42.871727 1 ttl_controller.go:116] Starting TTL controller
2019-05-26T18:12:42.871879331Z stderr F I0526 18:12:42.871761 1 controller_utils.go:1027] Waiting for caches to sync for TTL controller
2019-05-26T18:12:42.894041498Z stderr F I0526 18:12:42.893829 1 controllermanager.go:497] Started "persistentvolume-binder"
2019-05-26T18:12:42.894099434Z stderr F I0526 18:12:42.893971 1 pv_controller_base.go:270] Starting persistent volume controller
2019-05-26T18:12:42.894105198Z stderr F I0526 18:12:42.894005 1 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
2019-05-26T18:12:43.039330684Z stderr F I0526 18:12:43.039138 1 controllermanager.go:497] Started "pvc-protection"
2019-05-26T18:12:43.039375252Z stderr F I0526 18:12:43.039210 1 pvc_protection_controller.go:99] Starting PVC protection controller
2019-05-26T18:12:43.03938048Z stderr F I0526 18:12:43.039233 1 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
2019-05-26T18:12:43.28924353Z stderr F I0526 18:12:43.289038 1 controllermanager.go:497] Started "job"
2019-05-26T18:12:43.289282882Z stderr F I0526 18:12:43.289096 1 job_controller.go:143] Starting job controller
2019-05-26T18:12:43.289288682Z stderr F I0526 18:12:43.289115 1 controller_utils.go:1027] Waiting for caches to sync for job controller
2019-05-26T18:12:43.539738057Z stderr F I0526 18:12:43.539536 1 controllermanager.go:497] Started "csrapproving"
2019-05-26T18:12:43.53979817Z stderr F I0526 18:12:43.539603 1 certificate_controller.go:113] Starting certificate controller
2019-05-26T18:12:43.539804535Z stderr F I0526 18:12:43.539628 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller
2019-05-26T18:12:43.789513296Z stderr F I0526 18:12:43.789219 1 controllermanager.go:497] Started "bootstrapsigner"
2019-05-26T18:12:43.789548727Z stderr F I0526 18:12:43.789324 1 controller_utils.go:1027] Waiting for caches to sync for bootstrap_signer controller
2019-05-26T18:12:43.939251338Z stderr F I0526 18:12:43.939054 1 node_lifecycle_controller.go:292] Sending events to api server.
2019-05-26T18:12:43.939682733Z stderr F I0526 18:12:43.939614 1 node_lifecycle_controller.go:325] Controller is using taint based evictions.
2019-05-26T18:12:43.939788504Z stderr F I0526 18:12:43.939676 1 taint_manager.go:175] Sending events to api server.
2019-05-26T18:12:43.940118921Z stderr F I0526 18:12:43.940040 1 node_lifecycle_controller.go:390] Controller will reconcile labels.
2019-05-26T18:12:43.940172999Z stderr F I0526 18:12:43.940089 1 node_lifecycle_controller.go:403] Controller will taint node by condition.
2019-05-26T18:12:43.940186495Z stderr F I0526 18:12:43.940128 1 controllermanager.go:497] Started "nodelifecycle"
2019-05-26T18:12:43.940251657Z stderr F I0526 18:12:43.940184 1 node_lifecycle_controller.go:427] Starting node controller
2019-05-26T18:12:43.940274791Z stderr F I0526 18:12:43.940214 1 controller_utils.go:1027] Waiting for caches to sync for taint controller
2019-05-26T18:12:44.190006534Z stderr F I0526 18:12:44.189788 1 controllermanager.go:497] Started "clusterrole-aggregation"
2019-05-26T18:12:44.19004112Z stderr F W0526 18:12:44.189829 1 controllermanager.go:489] Skipping "root-ca-cert-publisher"
2019-05-26T18:12:44.190046655Z stderr F I0526 18:12:44.189860 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
2019-05-26T18:12:44.190050673Z stderr F I0526 18:12:44.189881 1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
2019-05-26T18:12:44.33899693Z stderr F I0526 18:12:44.338800 1 controllermanager.go:497] Started "csrcleaner"
2019-05-26T18:12:44.339176086Z stderr F I0526 18:12:44.338888 1 cleaner.go:81] Starting CSR cleaner controller
2019-05-26T18:12:44.589158467Z stderr F I0526 18:12:44.588954 1 controllermanager.go:497] Started "cronjob"
2019-05-26T18:12:44.589204118Z stderr F I0526 18:12:44.589053 1 cronjob_controller.go:94] Starting CronJob Manager
2019-05-26T18:12:44.739640669Z stderr F E0526 18:12:44.739437 1 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.739687031Z stderr F E0526 18:12:44.739473 1 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.739692686Z stderr F E0526 18:12:44.739512 1 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.740053164Z stderr F E0526 18:12:44.739902 1 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.740097235Z stderr F E0526 18:12:44.739939 1 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.740164739Z stderr F E0526 18:12:44.739961 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.740187177Z stderr F E0526 18:12:44.740006 1 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted
2019-05-26T18:12:44.740219594Z stderr F I0526 18:12:44.740046 1 controllermanager.go:497] Started "csrsigning"
2019-05-26T18:12:44.740225228Z stderr F I0526 18:12:44.740091 1 certificate_controller.go:113] Starting certificate controller
2019-05-26T18:12:44.740230107Z stderr F I0526 18:12:44.740119 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller
2019-05-26T18:12:44.990173471Z stderr F E0526 18:12:44.989856 1 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
2019-05-26T18:12:44.990258607Z stderr F W0526 18:12:44.989961 1 controllermanager.go:489] Skipping "service"
2019-05-26T18:12:44.990771507Z stderr F I0526 18:12:44.990681 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
2019-05-26T18:12:44.996783188Z stderr F E0526 18:12:44.996634 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
2019-05-26T18:12:45.011516271Z stderr F W0526 18:12:45.011325 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-control-plane" does not exist
2019-05-26T18:12:45.023639538Z stderr F I0526 18:12:45.023422 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
2019-05-26T18:12:45.032513469Z stderr F I0526 18:12:45.032237 1 controller_utils.go:1034] Caches are synced for HPA controller
2019-05-26T18:12:45.036890824Z stderr F I0526 18:12:45.036726 1 controller_utils.go:1034] Caches are synced for node controller
2019-05-26T18:12:45.036980453Z stderr F I0526 18:12:45.036831 1 range_allocator.go:157] Starting range CIDR allocator
2019-05-26T18:12:45.037014402Z stderr F I0526 18:12:45.036890 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller
2019-05-26T18:12:45.04002316Z stderr F I0526 18:12:45.039887 1 controller_utils.go:1034] Caches are synced for certificate controller
2019-05-26T18:12:45.040631926Z stderr F I0526 18:12:45.040548 1 controller_utils.go:1034] Caches are synced for certificate controller
2019-05-26T18:12:45.058416537Z stderr F I0526 18:12:45.058064 1 log.go:172] [INFO] signed certificate with serial number 196160908415707236328087296299285084736528529152
2019-05-26T18:12:45.072272637Z stderr F I0526 18:12:45.071995 1 controller_utils.go:1034] Caches are synced for TTL controller
2019-05-26T18:12:45.085325662Z stderr F I0526 18:12:45.085099 1 controller_utils.go:1034] Caches are synced for service account controller
2019-05-26T18:12:45.089433361Z stderr F I0526 18:12:45.089255 1 controller_utils.go:1034] Caches are synced for job controller
2019-05-26T18:12:45.089664741Z stderr F I0526 18:12:45.089509 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller
2019-05-26T18:12:45.096968009Z stderr F I0526 18:12:45.096816 1 controller_utils.go:1034] Caches are synced for deployment controller
2019-05-26T18:12:45.097752641Z stderr F I0526 18:12:45.097647 1 controller_utils.go:1034] Caches are synced for GC controller
2019-05-26T18:12:45.106766325Z stderr F I0526 18:12:45.106653 1 controller_utils.go:1034] Caches are synced for namespace controller
2019-05-26T18:12:45.12931464Z stderr F I0526 18:12:45.129124 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d3717a70-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2
2019-05-26T18:12:45.139800468Z stderr F I0526 18:12:45.139629 1 controller_utils.go:1034] Caches are synced for cidrallocator controller
2019-05-26T18:12:45.156010744Z stderr F I0526 18:12:45.155848 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"dc3edcb7-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-n8zwb
2019-05-26T18:12:45.164342472Z stderr F I0526 18:12:45.164181 1 range_allocator.go:310] Set node kind-control-plane PodCIDR to 10.244.0.0/24
2019-05-26T18:12:45.167831312Z stderr F I0526 18:12:45.167663 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"dc3edcb7-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-92xf6
2019-05-26T18:12:45.376270712Z stderr F I0526 18:12:45.376069 1 controller_utils.go:1034] Caches are synced for daemon sets controller
2019-05-26T18:12:45.390488037Z stderr F I0526 18:12:45.390341 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"d40b1b37-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"231", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rkpxr
2019-05-26T18:12:45.392130615Z stderr F I0526 18:12:45.392037 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d3a4ac2e-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-kxszg
2019-05-26T18:12:45.424765804Z stderr F E0526 18:12:45.424568 1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"d40b1b37-7fe1-11e9-83f7-0242ac110003", ResourceVersion:"231", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63694491151, loc:(*time.Location)(0x722ae00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ce2ba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ce2bc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.1.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001ce2be0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001ce2c20)}, v1.EnvVar{Name:"CNI_CONFIG_TEMPLATE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001ce2c60)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001caaa50), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ce6a08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c4e8a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"OnDelete", RollingUpdate:(*v1.RollingUpdateDaemonSet)(nil)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ce6a48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
2019-05-26T18:12:45.432742246Z stderr F I0526 18:12:45.432588 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"d41133a9-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"241", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-f9z6l
2019-05-26T18:12:45.434928858Z stderr F I0526 18:12:45.434793 1 controller_utils.go:1034] Caches are synced for PV protection controller
2019-05-26T18:12:45.540687971Z stderr F I0526 18:12:45.540483 1 controller_utils.go:1034] Caches are synced for taint controller
2019-05-26T18:12:45.540728723Z stderr F I0526 18:12:45.540610 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone:
2019-05-26T18:12:45.540804393Z stderr F W0526 18:12:45.540735 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp.
2019-05-26T18:12:45.540876566Z stderr F I0526 18:12:45.540828 1 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
2019-05-26T18:12:45.541073368Z stderr F I0526 18:12:45.541000 1 taint_manager.go:198] Starting NoExecuteTaintManager
2019-05-26T18:12:45.541418053Z stderr F I0526 18:12:45.541327 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-control-plane", UID:"d0d3820e-7fe1-11e9-83f7-0242ac110003", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-control-plane event: Registered Node kind-control-plane in Controller
2019-05-26T18:12:45.634347808Z stderr F I0526 18:12:45.634091 1 controller_utils.go:1034] Caches are synced for disruption controller
2019-05-26T18:12:45.63438545Z stderr F I0526 18:12:45.634161 1 disruption.go:294] Sending events to api server.
2019-05-26T18:12:45.647572764Z stderr F I0526 18:12:45.647176 1 controller_utils.go:1034] Caches are synced for stateful set controller
2019-05-26T18:12:45.651654687Z stderr F I0526 18:12:45.651317 1 controller_utils.go:1034] Caches are synced for ReplicationController controller
2019-05-26T18:12:45.654421826Z stderr F I0526 18:12:45.654278 1 controller_utils.go:1034] Caches are synced for attach detach controller
2019-05-26T18:12:45.655556976Z stderr F I0526 18:12:45.655371 1 controller_utils.go:1034] Caches are synced for expand controller
2019-05-26T18:12:45.694400545Z stderr F I0526 18:12:45.694261 1 controller_utils.go:1034] Caches are synced for persistent volume controller
2019-05-26T18:12:45.739873034Z stderr F I0526 18:12:45.739331 1 controller_utils.go:1034] Caches are synced for PVC protection controller
2019-05-26T18:12:45.778200691Z stderr F I0526 18:12:45.774385 1 controller_utils.go:1034] Caches are synced for endpoint controller
2019-05-26T18:12:45.790961201Z stderr F I0526 18:12:45.790794 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
2019-05-26T18:12:45.796303103Z stderr F I0526 18:12:45.795772 1 controller_utils.go:1034] Caches are synced for garbage collector controller
2019-05-26T18:12:45.796483842Z stderr F I0526 18:12:45.796441 1 controller_utils.go:1034] Caches are synced for garbage collector controller
2019-05-26T18:12:45.796566531Z stderr F I0526 18:12:45.796503 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2019-05-26T18:12:45.831046075Z stderr F I0526 18:12:45.828186 1 controller_utils.go:1034] Caches are synced for resource quota controller
2019-05-26T18:12:48.365164698Z stderr F I0526 18:12:48.365009 1 log.go:172] [INFO] signed certificate with serial number 641659768397347024919485141189028658976044602495
2019-05-26T18:12:48.688784506Z stderr F I0526 18:12:48.688645 1 log.go:172] [INFO] signed certificate with serial number 554822257710404495653200837014574959243391982643
2019-05-26T18:13:01.02898274Z stderr F W0526 18:13:01.028780 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker" does not exist
2019-05-26T18:13:01.110063536Z stderr F I0526 18:13:01.109932 1 range_allocator.go:310] Set node kind-worker PodCIDR to 10.244.1.0/24
2019-05-26T18:13:01.119254412Z stderr F I0526 18:13:01.119091 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d3a4ac2e-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-btr2x
2019-05-26T18:13:01.151883291Z stderr F I0526 18:13:01.151675 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"d40b1b37-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-ntf8l
2019-05-26T18:13:01.165160328Z stderr F I0526 18:13:01.164966 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"d41133a9-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-bq6xs
2019-05-26T18:13:01.440301303Z stderr F W0526 18:13:01.440107 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker2" does not exist
2019-05-26T18:13:01.454719088Z stderr F I0526 18:13:01.454542 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"d40b1b37-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xcdxm
2019-05-26T18:13:01.461316857Z stderr F I0526 18:13:01.461119 1 range_allocator.go:310] Set node kind-worker2 PodCIDR to 10.244.2.0/24
2019-05-26T18:13:01.465312571Z stderr F I0526 18:13:01.465138 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"d41133a9-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-j9898
2019-05-26T18:13:01.497882601Z stderr F I0526 18:13:01.497583 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d3a4ac2e-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q2l5w
2019-05-26T18:13:05.54373846Z stderr F W0526 18:13:05.543594 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.
2019-05-26T18:13:05.543923963Z stderr F W0526 18:13:05.543874 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker. Assuming now as a timestamp.
2019-05-26T18:13:05.544551168Z stderr F I0526 18:13:05.544430 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"e5bbf075-7fe1-11e9-83f7-0242ac110003", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker event: Registered Node kind-worker in Controller
2019-05-26T18:13:05.544733383Z stderr F I0526 18:13:05.544667 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker2", UID:"e5fbd9e6-7fe1-11e9-83f7-0242ac110003", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker2 event: Registered Node kind-worker2 in Controller
2019-05-26T18:13:06.077784904Z stderr F I0526 18:13:06.074684 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello", UID:"e8b54961-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-6d6586c69c to 1
2019-05-26T18:13:06.090219275Z stderr F I0526 18:13:06.090014 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-6d6586c69c", UID:"e8b93389-7fe1-11e9-83f7-0242ac110003", APIVersion:"apps/v1", ResourceVersion:"531", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-6d6586c69c-z62fd
2019-05-26T18:13:30.545707029Z stderr F I0526 18:13:30.545457 1 node_lifecycle_controller.go:1036] Controller detected that some Nodes are Ready. Exiting master disruption mode.
2019-05-26T18:12:48.154279262Z stderr F W0526 18:12:48.154084 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-05-26T18:12:48.171461362Z stderr F I0526 18:12:48.171295 1 server_others.go:146] Using iptables Proxier.
2019-05-26T18:12:48.171827194Z stderr F I0526 18:12:48.171745 1 server.go:562] Version: v1.14.2
2019-05-26T18:12:48.194479882Z stderr F I0526 18:12:48.194307 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
2019-05-26T18:12:48.194943174Z stderr F I0526 18:12:48.194798 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-05-26T18:12:48.195099608Z stderr F I0526 18:12:48.195057 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-05-26T18:12:48.195223921Z stderr F I0526 18:12:48.195189 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-05-26T18:12:48.198432233Z stderr F I0526 18:12:48.198345 1 config.go:202] Starting service config controller
2019-05-26T18:12:48.213656646Z stderr F I0526 18:12:48.213524 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-05-26T18:12:48.213783551Z stderr F I0526 18:12:48.198898 1 config.go:102] Starting endpoints config controller
2019-05-26T18:12:48.213855602Z stderr F I0526 18:12:48.213828 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-05-26T18:12:48.316285543Z stderr F I0526 18:12:48.316134 1 controller_utils.go:1034] Caches are synced for service config controller
2019-05-26T18:12:48.319123882Z stderr F I0526 18:12:48.319041 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-05-26T18:13:06.257903594Z stderr F W0526 18:13:06.257613 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-05-26T18:13:06.272150405Z stderr F I0526 18:13:06.271992 1 server_others.go:146] Using iptables Proxier.
2019-05-26T18:13:06.272738709Z stderr F I0526 18:13:06.272643 1 server.go:562] Version: v1.14.2
2019-05-26T18:13:06.285630893Z stderr F I0526 18:13:06.285492 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-05-26T18:13:06.288233985Z stderr F I0526 18:13:06.288120 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-05-26T18:13:06.288467929Z stderr F I0526 18:13:06.288403 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-05-26T18:13:06.295468252Z stderr F I0526 18:13:06.294776 1 config.go:202] Starting service config controller
2019-05-26T18:13:06.295502666Z stderr F I0526 18:13:06.294844 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-05-26T18:13:06.295520023Z stderr F I0526 18:13:06.294882 1 config.go:102] Starting endpoints config controller
2019-05-26T18:13:06.295525696Z stderr F I0526 18:13:06.294896 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-05-26T18:13:06.398983432Z stderr F I0526 18:13:06.398801 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-05-26T18:13:06.399083968Z stderr F I0526 18:13:06.398801 1 controller_utils.go:1034] Caches are synced for service config controller
2019-05-26T18:12:18.633282871Z stderr F I0526 18:12:18.602712 1 serving.go:319] Generated self-signed cert in-memory
2019-05-26T18:12:19.846928087Z stderr F W0526 18:12:19.819638 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
2019-05-26T18:12:19.8469581Z stderr F W0526 18:12:19.819666 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
2019-05-26T18:12:19.846966256Z stderr F W0526 18:12:19.819682 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
2019-05-26T18:12:19.846972941Z stderr F I0526 18:12:19.823464 1 server.go:142] Version: v1.14.2
2019-05-26T18:12:19.846979156Z stderr F I0526 18:12:19.823569 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
2019-05-26T18:12:19.846985182Z stderr F W0526 18:12:19.837439 1 authorization.go:47] Authorization is disabled
2019-05-26T18:12:19.846990162Z stderr F W0526 18:12:19.837462 1 authentication.go:55] Authentication is disabled
2019-05-26T18:12:19.846994626Z stderr F I0526 18:12:19.837474 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
2019-05-26T18:12:19.846998911Z stderr F I0526 18:12:19.838123 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
2019-05-26T18:12:19.876272175Z stderr F E0526 18:12:19.846775 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://172.17.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876298966Z stderr F E0526 18:12:19.846897 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://172.17.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.87630544Z stderr F E0526 18:12:19.846975 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://172.17.0.3:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876311641Z stderr F E0526 18:12:19.847084 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://172.17.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876316032Z stderr F E0526 18:12:19.847164 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://172.17.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876322228Z stderr F E0526 18:12:19.847241 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://172.17.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.87632782Z stderr F E0526 18:12:19.847368 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://172.17.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876331788Z stderr F E0526 18:12:19.847442 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://172.17.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876348833Z stderr F E0526 18:12:19.847504 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://172.17.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:19.876352882Z stderr F E0526 18:12:19.847566 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://172.17.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.849623591Z stderr F E0526 18:12:20.849497 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://172.17.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.852981288Z stderr F E0526 18:12:20.852797 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://172.17.0.3:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.853033004Z stderr F E0526 18:12:20.852896 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://172.17.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860832418Z stderr F E0526 18:12:20.856577 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://172.17.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860882489Z stderr F E0526 18:12:20.856588 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://172.17.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860890737Z stderr F E0526 18:12:20.856931 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://172.17.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.86089606Z stderr F E0526 18:12:20.857625 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://172.17.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860900628Z stderr F E0526 18:12:20.858536 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://172.17.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860905576Z stderr F E0526 18:12:20.859548 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://172.17.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:20.860909913Z stderr F E0526 18:12:20.860657 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://172.17.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:6443: connect: connection refused
2019-05-26T18:12:25.864607512Z stderr F E0526 18:12:25.864448 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-05-26T18:12:25.874284256Z stderr F E0526 18:12:25.874149 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-05-26T18:12:25.876644651Z stderr F E0526 18:12:25.876561 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-05-26T18:12:25.876870909Z stderr F E0526 18:12:25.876831 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-05-26T18:12:25.883378547Z stderr F E0526 18:12:25.883263 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-05-26T18:12:25.883512462Z stderr F E0526 18:12:25.883460 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-05-26T18:12:25.883591884Z stderr F E0526 18:12:25.883557 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-05-26T18:12:25.883656522Z stderr F E0526 18:12:25.883586 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-05-26T18:12:25.883730792Z stderr F E0526 18:12:25.883646 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-05-26T18:12:25.886482309Z stderr F E0526 18:12:25.886355 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-05-26T18:12:26.868320243Z stderr F E0526 18:12:26.868150 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-05-26T18:12:26.876373068Z stderr F E0526 18:12:26.876198 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-05-26T18:12:26.879037026Z stderr F E0526 18:12:26.878907 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-05-26T18:12:26.879942467Z stderr F E0526 18:12:26.879868 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-05-26T18:12:26.884782217Z stderr F E0526 18:12:26.884711 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-05-26T18:12:26.887558914Z stderr F E0526 18:12:26.887418 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-05-26T18:12:26.891303059Z stderr F E0526 18:12:26.891205 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-05-26T18:12:26.896100456Z stderr F E0526 18:12:26.896003 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-05-26T18:12:26.900082239Z stderr F E0526 18:12:26.899989 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-05-26T18:12:26.904311728Z stderr F E0526 18:12:26.904211 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-05-26T18:12:28.740075263Z stderr F I0526 18:12:28.739905 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
2019-05-26T18:12:28.840239242Z stderr F I0526 18:12:28.840070 1 controller_utils.go:1034] Caches are synced for scheduler controller
2019-05-26T18:12:28.840339286Z stderr F I0526 18:12:28.840194 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
2019-05-26T18:12:28.847622136Z stderr F I0526 18:12:28.847497 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
-- Logs begin at Sun 2019-05-26 18:11:56 UTC, end at Sun 2019-05-26 18:14:02 UTC. --
May 26 18:11:56 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:11:57 kind-worker kubelet[45]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:11:57 kind-worker kubelet[45]: F0526 18:11:57.031783 45 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:11:57 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:11:57 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 26 18:12:07 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:07 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:07 kind-worker kubelet[64]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:07 kind-worker kubelet[64]: F0526 18:12:07.241852 64 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:07 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 26 18:12:17 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:17 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:17 kind-worker kubelet[72]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:17 kind-worker kubelet[72]: F0526 18:12:17.529057 72 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:17 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 26 18:12:27 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:27 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:27 kind-worker kubelet[80]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:27 kind-worker kubelet[80]: F0526 18:12:27.761741 80 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:27 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 26 18:12:37 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:37 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:37 kind-worker kubelet[117]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:37 kind-worker kubelet[117]: F0526 18:12:37.983346 117 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
May 26 18:12:37 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 26 18:12:47 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 26 18:12:47 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 26 18:12:47 kind-worker kubelet[150]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:47 kind-worker kubelet[150]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165119 150 server.go:417] Version: v1.14.2
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165455 150 plugins.go:103] No cloud provider specified.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.165480 150 server.go:754] Client rotation is on, will bootstrap in background
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.193996 150 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197314 150 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197347 150 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197425 150 container_manager_linux.go:286] Creating device plugin manager: true
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.197541 150 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.207280 150 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.207342 150 kubelet.go:304] Watching apiserver
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.220319 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.220539 150 file.go:98] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.230857 150 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231108 150 remote_runtime.go:62] parsed scheme: ""
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231172 150 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.231256 150 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231408 150 remote_image.go:50] parsed scheme: ""
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231495 150 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231370 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231807 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.231926 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457710, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232438 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457710, READY
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232611 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.232679 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.241328 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00035e1e0, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.241921 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00035e1e0, READY
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.243960 150 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.244365 150 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.248926 150 server.go:1037] Started kubelet
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.250949 150 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251068 150 status_manager.go:152] Starting to sync pod status with apiserver
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251139 150 kubelet.go:1806] Starting kubelet main sync loop.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251217 150 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.251370 150 server.go:141] Starting to listen on 0.0.0.0:10250
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.252218 150 server.go:343] Adding debug handlers to kubelet server.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.253657 150 volume_manager.go:248] Starting Kubelet Volume Manager
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.257857 150 desired_state_of_world_populator.go:130] Desired state populator starts to run
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.276100 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.284519 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.284948 150 clientconn.go:440] parsed scheme: "unix"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285055 150 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285169 150 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}]
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285250 150 clientconn.go:796] ClientConn switching balancer to "pick_first"
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285367 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009f3670, CONNECTING
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.285827 150 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009f3670, READY
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.291417 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.295245 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.295629 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.338049 150 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.340506 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b25d511c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a80ed5d1c3, ext:951327693, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a80ed5d1c3, ext:951327693, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.366075 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.368834 150 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.372316 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.372842 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.381342 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.386920 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388551 150 cpu_manager.go:155] [cpumanager] starting with none policy
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388683 150 cpu_manager.go:156] [cpumanager] reconciling every 10s
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.388743 150 policy_none.go:42] [cpumanager] none policy: Start
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.397668 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: W0526 18:12:48.424572 150 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.431735 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.451882 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.454609 150 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.459171 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.464287 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a81727ecc9, ext:1090926289, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.465452 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8172805c3, ext:1090932679, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.466508 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a817281732, ext:1090937147, loc:(*time.Location)(0x7ff4900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.467409 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b30d34d89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a819d40d89, ext:1135761297, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a819d40d89, ext:1135761297, loc:(*time.Location)(0x7ff4900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.472967 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.551815 150 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.573261 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.598105 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:48 kind-worker kubelet[150]: I0526 18:12:48.599638 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.601261 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.601212 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bccd49, ext:1302009681, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.602358 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bcf7ac, ext:1302020527, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.603492 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a823bd6854, ext:1302049369, loc:(*time.Location)(0x7ff4900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.673626 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.773955 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.874130 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.953393 150 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:48 kind-worker kubelet[150]: E0526 18:12:48.974341 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.001612 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.004439 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.005887 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.006263 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840432f48, ext:1706836801, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.007354 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840435a72, ext:1706847856, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.051138 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a840436e60, ext:1706852951, loc:(*time.Location)(0x7ff4900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.074535 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.174714 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.220710 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.274915 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.286063 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.292748 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.296761 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.300024 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.369084 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.375079 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.475271 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.575459 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.675654 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.754919 150 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.775860 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.806291 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:49 kind-worker kubelet[150]: I0526 18:12:49.807708 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.809204 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.809281 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a87023e37a, ext:2510092152, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.810137 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8702416ed, ext:2510105327, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.811073 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8702432f6, ext:2510112502, loc:(*time.Location)(0x7ff4900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.876039 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:49 kind-worker kubelet[150]: E0526 18:12:49.976237 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.076407 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.176669 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.220978 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.276896 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.287716 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.294146 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.297886 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.301151 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.370613 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.377087 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.478309 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.578569 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.678717 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.779191 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.879390 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:50 kind-worker kubelet[150]: E0526 18:12:50.979539 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.079737 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.179939 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.221239 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.280601 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.289165 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.295414 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.298891 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.302311 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.356252 150 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.371757 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.381030 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: I0526 18:12:51.409385 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:51 kind-worker kubelet[150]: I0526 18:12:51.410633 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.411952 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.412139 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8790400, ext:4113017851, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.412889 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8793b58, ext:4113032023, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.413571 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8d8795399, ext:4113038225, loc:(*time.Location)(0x7ff4900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.481187 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.581348 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.681467 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.781644 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.881807 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:51 kind-worker kubelet[150]: E0526 18:12:51.982174 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.082323 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.182487 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.221544 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.282634 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.290646 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.296815 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.300144 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.303624 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.372892 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.383138 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.483327 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.583525 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.683647 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.783828 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.883993 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:52 kind-worker kubelet[150]: E0526 18:12:52.984173 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.084357 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.184542 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.221773 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.284704 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.292248 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.298266 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.301192 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.304601 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.374338 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.385101 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.433941 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.485281 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.585445 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.685608 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.787227 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.887466 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:53 kind-worker kubelet[150]: E0526 18:12:53.987653 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.087848 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.188074 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.222241 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.288262 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.293930 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.299931 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.302706 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.305679 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.375847 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.388454 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.488601 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.557563 150 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.589372 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: I0526 18:12:54.612211 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:12:54 kind-worker kubelet[150]: I0526 18:12:54.613353 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.614751 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e5999", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f1999, ext:1089299342, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48e614f, ext:7315744593, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e5999" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.615273 150 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.615752 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e89b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f49b0, ext:1089311665, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48eac88, ext:7315763845, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e89b0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.616480 150 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a24e2b2e0e78b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a8170f38b7, ext:1089307315, loc:(*time.Location)(0x7ff4900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf32d3a9a48e98ba, ext:7315758776, loc:(*time.Location)(0x7ff4900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a24e2b2e0e78b7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
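
Note on the loop above: every request (register node, patch events, list pods/services/nodes, get the node lease) is being rejected for User "system:anonymous", and the lease retries back off 800ms -> 1.6s -> 3.2s -> 6.4s. This pattern suggests the kubelet's API client has no usable credentials yet; the "certificate rotation detected" entry at 18:12:58 and the successful registration at 18:13:01 further down suggest TLS bootstrap finishes a few seconds later and the same calls then succeed. A minimal sketch for pulling those milestones out of an excerpt like this one, assuming the journal text is saved to kubelet.log (the file name and the matched substrings are assumptions taken from the lines above, not part of any tool):

# summarize_kubelet_bootstrap.py -- illustrative only.
import re
import sys

MILESTONES = [
    ("first anonymous rejection", 'User "system:anonymous" cannot'),
    ("certificate rotation",      "certificate rotation detected"),
    ("node registered",           "Successfully registered node"),
]

def main(path):
    found = {}
    with open(path) as fh:
        for line in fh:
            # journald prefix looks like: "May 26 18:12:48 kind-worker kubelet[150]: ..."
            m = re.match(r"(\w{3} +\d+ +\d{2}:\d{2}:\d{2}) ", line)
            if not m:
                continue
            for name, needle in MILESTONES:
                if needle in line and name not in found:
                    found[name] = m.group(1)
    for name, _ in MILESTONES:
        print(f"{name:26s} {found.get(name, 'not seen')}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")

Running it against this excerpt should print roughly 18:12:48 / 18:12:58 / 18:13:01, i.e. a bootstrap window of about 13 seconds of retries.
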
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.689579 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.789753 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.889942 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:54 kind-worker kubelet[150]: E0526 18:12:54.990119 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.090527 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.190717 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.222471 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.290975 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.295594 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.301195 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.303780 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.306788 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.391163 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.421917 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.491370 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.592585 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.692756 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.792926 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.893112 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:55 kind-worker kubelet[150]: E0526 18:12:55.993337 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.093527 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.193740 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.222688 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.294099 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.297095 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.302623 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.304680 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.307982 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.394329 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.423549 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.494727 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.594932 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.695114 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.795321 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.897144 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:56 kind-worker kubelet[150]: E0526 18:12:56.997337 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.097546 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.197723 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.222960 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.297846 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.298669 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.303906 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.305687 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.308884 150 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.398066 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.425260 150 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.498296 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.598563 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.698767 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.798993 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.899256 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:57 kind-worker kubelet[150]: E0526 18:12:57.999463 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.099695 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: I0526 18:12:58.169927 150 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.199982 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.223189 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.302899 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: I0526 18:12:58.392492 150 reconciler.go:154] Reconciler: start to sync state
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.403091 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.435180 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.455279 150 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.503293 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.603477 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.703659 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.804100 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:58 kind-worker kubelet[150]: E0526 18:12:58.904344 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.004743 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.104908 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.205134 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.223410 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.305377 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.405569 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.505742 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.606057 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.706257 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.806545 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:12:59 kind-worker kubelet[150]: E0526 18:12:59.906753 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.007213 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.107425 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.209175 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.223626 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.309366 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.409596 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.509821 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.610047 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.710270 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.810539 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.910774 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:00 kind-worker kubelet[150]: E0526 18:13:00.961977 150 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.011041 150 kubelet.go:2244] node "kind-worker" not found
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.015486 150 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.016580 150 kubelet_node_status.go:72] Attempting to register node kind-worker
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.029230 150 kubelet_node_status.go:75] Successfully registered node kind-worker
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.111239 150 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.1.0/24
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.111899 150 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.1.0/24
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.112376 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:01 kind-worker kubelet[150]: E0526 18:13:01.224043 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.299996 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-qsjml" (UniqueName: "kubernetes.io/secret/e5bf0d1f-7fe1-11e9-83f7-0242ac110003-kindnet-token-qsjml") pod "kindnet-ntf8l" (UID: "e5bf0d1f-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300053 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e5be4844-7fe1-11e9-83f7-0242ac110003-kube-proxy") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300086 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/e5be4844-7fe1-11e9-83f7-0242ac110003-xtables-lock") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300113 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/e5be4844-7fe1-11e9-83f7-0242ac110003-lib-modules") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300141 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-tqsn6" (UniqueName: "kubernetes.io/secret/e5be4844-7fe1-11e9-83f7-0242ac110003-kube-proxy-token-tqsn6") pod "kube-proxy-btr2x" (UID: "e5be4844-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300171 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/e5d062ca-7fe1-11e9-83f7-0242ac110003-config") pod "ip-masq-agent-bq6xs" (UID: "e5d062ca-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300199 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-pqnvx" (UniqueName: "kubernetes.io/secret/e5d062ca-7fe1-11e9-83f7-0242ac110003-ip-masq-agent-token-pqnvx") pod "ip-masq-agent-bq6xs" (UID: "e5d062ca-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:01 kind-worker kubelet[150]: I0526 18:13:01.300225 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/e5bf0d1f-7fe1-11e9-83f7-0242ac110003-cni-cfg") pod "kindnet-ntf8l" (UID: "e5bf0d1f-7fe1-11e9-83f7-0242ac110003")
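
After the rotated client certificate is picked up, registration succeeds (18:13:01), the node is assigned Pod CIDR 10.244.1.0/24, and the reconciler starts attaching volumes for the kube-proxy, kindnet and ip-masq-agent DaemonSet pods. The remaining NetworkPluginNotReady / "cni plugin not initialized" errors presumably clear once the kindnet pod writes its CNI config (note the cni-cfg host-path volume above). A small sketch for waiting on the node's Ready condition, assuming the official "kubernetes" Python client is installed and a kubeconfig for this kind cluster is on the default path (both assumptions):

# wait_for_node_ready.py -- illustrative sketch, not part of kind or kubelet.
import time
from kubernetes import client, config

def node_ready(v1, name):
    # A node is Ready when its "Ready" condition has status "True".
    node = v1.read_node(name)
    for cond in node.status.conditions or []:
        if cond.type == "Ready":
            return cond.status == "True"
    return False

def main(name="kind-worker", timeout=120):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if node_ready(v1, name):
            print(f"{name} is Ready")
            return
        time.sleep(2)
    raise SystemExit(f"{name} not Ready after {timeout}s")

if __name__ == "__main__":
    main()
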
May 26 18:13:02 kind-worker kubelet[150]: E0526 18:13:02.224345 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:03 kind-worker kubelet[150]: E0526 18:13:03.224549 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:03 kind-worker kubelet[150]: E0526 18:13:03.594795 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:04 kind-worker kubelet[150]: E0526 18:13:04.224705 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:05 kind-worker kubelet[150]: E0526 18:13:05.224931 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:06 kind-worker kubelet[150]: E0526 18:13:06.226473 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:07 kind-worker kubelet[150]: E0526 18:13:07.226693 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.207516 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.226953 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.473934 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:08 kind-worker kubelet[150]: E0526 18:13:08.596388 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:09 kind-worker kubelet[150]: E0526 18:13:09.227257 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:10 kind-worker kubelet[150]: E0526 18:13:10.227500 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:11 kind-worker kubelet[150]: E0526 18:13:11.227757 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:12 kind-worker kubelet[150]: E0526 18:13:12.228012 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:13 kind-worker kubelet[150]: E0526 18:13:13.228920 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:13 kind-worker kubelet[150]: E0526 18:13:13.597448 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:14 kind-worker kubelet[150]: E0526 18:13:14.229209 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:15 kind-worker kubelet[150]: E0526 18:13:15.229429 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:16 kind-worker kubelet[150]: E0526 18:13:16.229718 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:17 kind-worker kubelet[150]: E0526 18:13:17.230042 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.230280 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.495285 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:18 kind-worker kubelet[150]: E0526 18:13:18.599086 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:19 kind-worker kubelet[150]: E0526 18:13:19.230573 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:20 kind-worker kubelet[150]: E0526 18:13:20.230925 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:21 kind-worker kubelet[150]: E0526 18:13:21.231201 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:22 kind-worker kubelet[150]: E0526 18:13:22.231518 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:23 kind-worker kubelet[150]: E0526 18:13:23.231811 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:23 kind-worker kubelet[150]: E0526 18:13:23.600418 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:24 kind-worker kubelet[150]: E0526 18:13:24.232126 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:25 kind-worker kubelet[150]: E0526 18:13:25.232392 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:26 kind-worker kubelet[150]: E0526 18:13:26.232660 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:27 kind-worker kubelet[150]: E0526 18:13:27.233448 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.207605 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.233732 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.539162 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:28 kind-worker kubelet[150]: E0526 18:13:28.601984 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:29 kind-worker kubelet[150]: E0526 18:13:29.233993 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:30 kind-worker kubelet[150]: E0526 18:13:30.234274 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:31 kind-worker kubelet[150]: E0526 18:13:31.234578 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:32 kind-worker kubelet[150]: E0526 18:13:32.235404 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:33 kind-worker kubelet[150]: E0526 18:13:33.235657 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:33 kind-worker kubelet[150]: E0526 18:13:33.603240 150 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
May 26 18:13:34 kind-worker kubelet[150]: E0526 18:13:34.235910 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:35 kind-worker kubelet[150]: E0526 18:13:35.236146 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:36 kind-worker kubelet[150]: E0526 18:13:36.236798 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:37 kind-worker kubelet[150]: E0526 18:13:37.237147 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:38 kind-worker kubelet[150]: E0526 18:13:38.237501 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:38 kind-worker kubelet[150]: E0526 18:13:38.569334 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:39 kind-worker kubelet[150]: E0526 18:13:39.237768 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:40 kind-worker kubelet[150]: E0526 18:13:40.238053 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:41 kind-worker kubelet[150]: E0526 18:13:41.238295 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:42 kind-worker kubelet[150]: E0526 18:13:42.238682 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:43 kind-worker kubelet[150]: E0526 18:13:43.238971 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:44 kind-worker kubelet[150]: E0526 18:13:44.239922 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:45 kind-worker kubelet[150]: E0526 18:13:45.240165 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:46 kind-worker kubelet[150]: E0526 18:13:46.240373 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:47 kind-worker kubelet[150]: E0526 18:13:47.240551 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.207556 150 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.240919 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.593765 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:48 kind-worker kubelet[150]: E0526 18:13:48.901095 150 reflector.go:126] object-"default"/"default-token-n7dzs": Failed to list *v1.Secret: secrets "default-token-n7dzs" is forbidden: User "system:node:kind-worker" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "kind-worker" and this object
May 26 18:13:49 kind-worker kubelet[150]: I0526 18:13:49.062881 150 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-n7dzs" (UniqueName: "kubernetes.io/secret/e8bf84f9-7fe1-11e9-83f7-0242ac110003-default-token-n7dzs") pod "hello-6d6586c69c-z62fd" (UID: "e8bf84f9-7fe1-11e9-83f7-0242ac110003")
May 26 18:13:49 kind-worker kubelet[150]: E0526 18:13:49.241163 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:50 kind-worker kubelet[150]: E0526 18:13:50.241427 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:51 kind-worker kubelet[150]: E0526 18:13:51.241747 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:52 kind-worker kubelet[150]: E0526 18:13:52.241996 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:53 kind-worker kubelet[150]: E0526 18:13:53.242252 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:54 kind-worker kubelet[150]: E0526 18:13:54.252148 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:55 kind-worker kubelet[150]: E0526 18:13:55.253062 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:56 kind-worker kubelet[150]: E0526 18:13:56.253773 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:57 kind-worker kubelet[150]: E0526 18:13:57.254455 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:58 kind-worker kubelet[150]: E0526 18:13:58.254719 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:13:58 kind-worker kubelet[150]: E0526 18:13:58.620330 150 summary_sys_containers.go:47] Failed to get system container stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get cgroup stats for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": failed to get container info for "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service": unknown container "/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/docker/537c9c66feb532d4605301ccdab476bfa58f0c52141c31655cc86132a79ef473/system.slice/kubelet.service"
May 26 18:13:59 kind-worker kubelet[150]: E0526 18:13:59.254943 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:00 kind-worker kubelet[150]: E0526 18:14:00.255165 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:01 kind-worker kubelet[150]: E0526 18:14:01.256199 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 26 18:14:01 kind-worker kubelet[150]: E0526 18:14:01.875034 150 upgradeaware.go:370] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42634->127.0.0.1:45391: write tcp 127.0.0.1:42634->127.0.0.1:45391: write: broken pipe
May 26 18:14:02 kind-worker kubelet[150]: E0526 18:14:02.260644 150 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
Initializing machine ID from random generator.
Inserted module 'autofs4'
systemd 240 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/cpu: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists
Welcome to Ubuntu Disco Dingo (development branch)!
Set hostname to <kind-worker>.
File /lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
Configuration file /etc/systemd/system/containerd.service.d/10-restart.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ OK ] Reached target Local File Systems.
[ OK ] Set up automount Arbitrary…s File System Automount Point.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Journal Socket.
Mounting Kernel Debug File System...
Mounting FUSE Control File System...
Starting Create list of re…odes for the current kernel...
[ OK ] Reached target Slices.
[ OK ] Started Dispatch Password …ts to Console Directory Watch.
[ OK ] Reached target Paths.
Starting Apply Kernel Variables...
[ OK ] Reached target Swap.
[ OK ] Listening on Journal Audit Socket.
[ OK ] Reached target Sockets.
Starting Journal Service...
Starting Create System Users...
[ OK ] Reached target Local Encrypted Volumes.
Mounting Huge Pages File System...
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Create list of req… nodes for the current kernel.
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted FUSE Control File System.
[ OK ] Started Apply Kernel Variables.
[ OK ] Mounted Kernel Debug File System.
[ OK ] Started Create System Users.
Starting Create Static Device Nodes in /dev...
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...