@xoco70
Last active October 24, 2019 15:36
minikube logs
➜ ~ minikube logs
==> Docker <==
-- Logs begin at Wed 2019-06-19 15:34:35 CEST, end at Thu 2019-10-24 17:36:01 CEST. --
oct. 24 15:12:01 ubuntu dockerd[1374]: time="2019-10-24T15:12:01.167614453+02:00" level=warning msg="failed to prune image docker.io/library/docker@sha256:5d11c2a1300b7438e21cbb5b2bd704e663b70534abe90c9072081dfb18e144de: No such image: docker@sha256:5d11c2a1300b7438e21cbb5b2bd704e663b70534abe90c9072081dfb18e144de"
oct. 24 15:12:03 ubuntu dockerd[1374]: time="2019-10-24T15:12:03.811309007+02:00" level=warning msg="failed to prune image registry.gitlab.com/sunchain/enedis_api@sha256:9d874e03c8ff40cb72518b946d892589b9fc04ac4c220e39fe6d2687bcadce46: No such image: registry.gitlab.com/sunchain/enedis_api@sha256:9d874e03c8ff40cb72518b946d892589b9fc04ac4c220e39fe6d2687bcadce46"
oct. 24 15:12:05 ubuntu dockerd[1374]: time="2019-10-24T15:12:05.708851998+02:00" level=warning msg="failed to prune image k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7: No such image: k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7"
oct. 24 15:12:05 ubuntu dockerd[1374]: time="2019-10-24T15:12:05.768640790+02:00" level=warning msg="failed to prune image docker.io/library/influxdb@sha256:af07db2e2040b27d4ae4c57b51729503802d22c1632893fe63ea054afe632ecc: No such image: influxdb@sha256:af07db2e2040b27d4ae4c57b51729503802d22c1632893fe63ea054afe632ecc"
oct. 24 15:12:17 ubuntu dockerd[1374]: time="2019-10-24T15:12:17.578525358+02:00" level=warning msg="failed to prune image docker.io/library/mongo-express@sha256:0a628412cec39dc072c102bec354005ffdda6190ac9b3bc05ae1c10690163866: No such image: mongo-express@sha256:0a628412cec39dc072c102bec354005ffdda6190ac9b3bc05ae1c10690163866"
oct. 24 15:12:20 ubuntu dockerd[1374]: time="2019-10-24T15:12:20.173138680+02:00" level=warning msg="failed to prune image k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5: No such image: k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5"
oct. 24 15:12:23 ubuntu dockerd[1374]: time="2019-10-24T15:12:23.335808040+02:00" level=warning msg="failed to prune image k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa: No such image: k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa"
oct. 24 15:12:26 ubuntu dockerd[1374]: time="2019-10-24T15:12:26.379253727+02:00" level=warning msg="failed to prune image docker.io/hyperledger/fabric-tools@sha256:a4e8a104b8cd83729cc4f706c330b86581e7953819c26376d6cdc155db05ce20: No such image: hyperledger/fabric-tools@sha256:a4e8a104b8cd83729cc4f706c330b86581e7953819c26376d6cdc155db05ce20"
oct. 24 15:12:27 ubuntu dockerd[1374]: time="2019-10-24T15:12:27.103497240+02:00" level=warning msg="failed to prune image docker.io/dpage/pgadmin4@sha256:ed935172d395e900bbd80e5a9edd1bc76b7fe9e1bacc1c04b10a3b77bb2155f7: No such image: dpage/pgadmin4@sha256:ed935172d395e900bbd80e5a9edd1bc76b7fe9e1bacc1c04b10a3b77bb2155f7"
oct. 24 15:12:27 ubuntu dockerd[1374]: time="2019-10-24T15:12:27.374700202+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-scheduler@sha256:8fd3c3251f07234a234469e201900e4274726f1fe0d5dc6fb7da911f1c851a1a: No such image: k8s.gcr.io/kube-scheduler@sha256:8fd3c3251f07234a234469e201900e4274726f1fe0d5dc6fb7da911f1c851a1a"
oct. 24 15:12:33 ubuntu dockerd[1374]: time="2019-10-24T15:12:33.972722395+02:00" level=warning msg="failed to prune image registry.gitlab.com/sunchain/enedis_api@sha256:2ff6187fe1f8128a05a2e2df1b3a47c8af07f6f26e9870f75a140f140ff10f29: No such image: registry.gitlab.com/sunchain/enedis_api@sha256:2ff6187fe1f8128a05a2e2df1b3a47c8af07f6f26e9870f75a140f140ff10f29"
oct. 24 15:12:58 ubuntu dockerd[1374]: time="2019-10-24T15:12:58.314298037+02:00" level=warning msg="failed to prune image docker.io/library/postgres@sha256:1518027f4aaee49b836c5cf4ece1b4a16bdcd820af873402e19e1cc181c1aff2: No such image: postgres@sha256:1518027f4aaee49b836c5cf4ece1b4a16bdcd820af873402e19e1cc181c1aff2"
oct. 24 15:12:58 ubuntu dockerd[1374]: time="2019-10-24T15:12:58.938169702+02:00" level=warning msg="failed to prune image docker.io/hyperledger/fabric-baseos@sha256:a1281185d8e624930b634dfdb0fc3f63b369db79154d054d9da61abbc39c1dde: No such image: hyperledger/fabric-baseos@sha256:a1281185d8e624930b634dfdb0fc3f63b369db79154d054d9da61abbc39c1dde"
oct. 24 15:13:02 ubuntu dockerd[1374]: time="2019-10-24T15:13:02.275375555+02:00" level=warning msg="failed to prune image docker.io/hyperledger/cello-operator-dashboard@sha256:8fb3c48cfba8bf607f52afff2a3a8b8812ebb335afdde167c99f40ae3faeb843: No such image: hyperledger/cello-operator-dashboard@sha256:8fb3c48cfba8bf607f52afff2a3a8b8812ebb335afdde167c99f40ae3faeb843"
oct. 24 15:13:05 ubuntu dockerd[1374]: time="2019-10-24T15:13:05.033674014+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-controller-manager@sha256:7d3fc48cf83aa0a7b8f129fa4255bb5530908e1a5b194be269ea8329b48e9598: No such image: k8s.gcr.io/kube-controller-manager@sha256:7d3fc48cf83aa0a7b8f129fa4255bb5530908e1a5b194be269ea8329b48e9598"
oct. 24 15:13:32 ubuntu dockerd[1374]: time="2019-10-24T15:13:32.179724467+02:00" level=warning msg="failed to prune image docker.io/library/postgres@sha256:f310592cf3964f038dbaefac2dc2088982e5ab06312a590bcacc97749ee5db69: No such image: postgres@sha256:f310592cf3964f038dbaefac2dc2088982e5ab06312a590bcacc97749ee5db69"
oct. 24 15:13:38 ubuntu dockerd[1374]: time="2019-10-24T15:13:38.087374431+02:00" level=warning msg="failed to prune image docker.io/hyperledger/fabric-ccenv@sha256:b972b21efad90db2751b448020cb385f01224190923a877f29aa1611979c96a7: No such image: hyperledger/fabric-ccenv@sha256:b972b21efad90db2751b448020cb385f01224190923a877f29aa1611979c96a7"
oct. 24 15:13:41 ubuntu dockerd[1374]: time="2019-10-24T15:13:41.866493274+02:00" level=warning msg="failed to prune image docker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10: No such image: alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10"
oct. 24 15:13:48 ubuntu dockerd[1374]: time="2019-10-24T15:13:48.077050470+02:00" level=warning msg="failed to prune image docker.io/grafana/grafana@sha256:94322b292d32ebe1fad7128f1112cafb8087dddb8fb8609c550782586816e0c8: No such image: grafana/grafana@sha256:94322b292d32ebe1fad7128f1112cafb8087dddb8fb8609c550782586816e0c8"
oct. 24 15:13:53 ubuntu dockerd[1374]: time="2019-10-24T15:13:53.500247294+02:00" level=warning msg="failed to prune image docker.io/library/node@sha256:aa28f3b6b4087b3f289bebaca8d3fb82b93137ae739aa67df3a04892d521958e: No such image: node@sha256:aa28f3b6b4087b3f289bebaca8d3fb82b93137ae739aa67df3a04892d521958e"
oct. 24 15:13:53 ubuntu dockerd[1374]: time="2019-10-24T15:13:53.614170599+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e: No such image: k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e"
oct. 24 15:13:54 ubuntu dockerd[1374]: time="2019-10-24T15:13:54.921281290+02:00" level=warning msg="failed to prune image docker.io/library/node@sha256:b6a7ef79a92fb17616a798934d1a5e84583d1125d756469504cd0e8a30c6b0a6: No such image: node@sha256:b6a7ef79a92fb17616a798934d1a5e84583d1125d756469504cd0e8a30c6b0a6"
oct. 24 15:13:57 ubuntu dockerd[1374]: time="2019-10-24T15:13:57.082483724+02:00" level=warning msg="failed to prune image docker.io/hyperledger/cello-baseimage@sha256:cafbfafdd59b6b596f57ec42154b6a7c26a653fd21da6440ab157f3bf957185e: No such image: hyperledger/cello-baseimage@sha256:cafbfafdd59b6b596f57ec42154b6a7c26a653fd21da6440ab157f3bf957185e"
oct. 24 15:14:02 ubuntu dockerd[1374]: time="2019-10-24T15:14:02.717178384+02:00" level=warning msg="failed to prune image k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4: No such image: k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4"
oct. 24 15:14:05 ubuntu dockerd[1374]: time="2019-10-24T15:14:05.062839565+02:00" level=warning msg="failed to prune image registry.gitlab.com/sunchain/espace_client/api@sha256:96563df34df52a14dd3d25ca00a7e5dfbe0dfc6c4e9d92ffdca6653223bb35f8: No such image: registry.gitlab.com/sunchain/espace_client/api@sha256:96563df34df52a14dd3d25ca00a7e5dfbe0dfc6c4e9d92ffdca6653223bb35f8"
oct. 24 15:14:05 ubuntu dockerd[1374]: time="2019-10-24T15:14:05.682528143+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-addon-manager@sha256:672794ee3582521eb8bc4f257d0f70c92893f1989f39a200f9c84bcfe1aea7c9: No such image: k8s.gcr.io/kube-addon-manager@sha256:672794ee3582521eb8bc4f257d0f70c92893f1989f39a200f9c84bcfe1aea7c9"
oct. 24 15:14:13 ubuntu dockerd[1374]: time="2019-10-24T15:14:13.065598657+02:00" level=warning msg="failed to prune image k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747: No such image: k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747"
oct. 24 15:14:18 ubuntu dockerd[1374]: time="2019-10-24T15:14:18.359194948+02:00" level=warning msg="failed to prune image docker.io/grafana/grafana@sha256:a48dbcdd80f74465d98dbea58911fc450bc76cf2c89a5046b0551f7d260fe88a: No such image: grafana/grafana@sha256:a48dbcdd80f74465d98dbea58911fc450bc76cf2c89a5046b0551f7d260fe88a"
oct. 24 15:14:19 ubuntu dockerd[1374]: time="2019-10-24T15:14:19.049099862+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0: No such image: k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0"
oct. 24 15:14:23 ubuntu dockerd[1374]: time="2019-10-24T15:14:23.623529349+02:00" level=warning msg="failed to prune image docker.io/library/node@sha256:1d4a8dbe3817d65b5915de8c5df1c6b223986514b286275490cb15d55438e8b6: No such image: node@sha256:1d4a8dbe3817d65b5915de8c5df1c6b223986514b286275490cb15d55438e8b6"
oct. 24 15:14:37 ubuntu dockerd[1374]: time="2019-10-24T15:14:37.742174933+02:00" level=warning msg="failed to prune image docker.io/library/influxdb@sha256:f0b7acde2d7fa215576a9f83abbf363b6f5641896535a01dbaf62299ab2272f9: No such image: influxdb@sha256:f0b7acde2d7fa215576a9f83abbf363b6f5641896535a01dbaf62299ab2272f9"
oct. 24 15:37:08 ubuntu dockerd[1374]: time="2019-10-24T15:37:08.186421341+02:00" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
oct. 24 15:37:08 ubuntu dockerd[1374]: time="2019-10-24T15:37:08.465901687+02:00" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
oct. 24 17:25:49 ubuntu dockerd[1374]: time="2019-10-24T17:25:49.386412972+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:49 ubuntu dockerd[1374]: time="2019-10-24T17:25:49.583187174+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:49 ubuntu dockerd[1374]: time="2019-10-24T17:25:49.771165247+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:50 ubuntu dockerd[1374]: time="2019-10-24T17:25:50.236656732+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:50 ubuntu dockerd[1374]: time="2019-10-24T17:25:50.612344475+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:50 ubuntu dockerd[1374]: time="2019-10-24T17:25:50.811324345+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:51 ubuntu dockerd[1374]: time="2019-10-24T17:25:51.016172615+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:51 ubuntu dockerd[1374]: time="2019-10-24T17:25:51.198435038+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:51 ubuntu dockerd[1374]: time="2019-10-24T17:25:51.441810324+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:51 ubuntu dockerd[1374]: time="2019-10-24T17:25:51.718266104+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:51 ubuntu dockerd[1374]: time="2019-10-24T17:25:51.925931739+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:52 ubuntu dockerd[1374]: time="2019-10-24T17:25:52.143836432+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:52 ubuntu dockerd[1374]: time="2019-10-24T17:25:52.343512999+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:52 ubuntu dockerd[1374]: time="2019-10-24T17:25:52.536142649+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:52 ubuntu dockerd[1374]: time="2019-10-24T17:25:52.753449660+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:53 ubuntu dockerd[1374]: time="2019-10-24T17:25:53.004698558+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:53 ubuntu dockerd[1374]: time="2019-10-24T17:25:53.243200971+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:25:53 ubuntu dockerd[1374]: time="2019-10-24T17:25:53.527311645+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.156407284+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e: No such image: k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.184430594+02:00" level=warning msg="failed to prune image k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5: No such image: k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.223861696+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794: No such image: k8s.gcr.io/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.248900193+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd: No such image: k8s.gcr.io/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.272310188+02:00" level=warning msg="failed to prune image k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea: No such image: k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.346439827+02:00" level=warning msg="failed to prune image k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa: No such image: k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa"
oct. 24 17:28:15 ubuntu dockerd[1374]: time="2019-10-24T17:28:15.449943297+02:00" level=warning msg="failed to prune image k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0: No such image: k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0"
oct. 24 17:31:08 ubuntu dockerd[1374]: time="2019-10-24T17:31:08.700894808+02:00" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
oct. 24 17:31:08 ubuntu dockerd[1374]: time="2019-10-24T17:31:08.801573173+02:00" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4536af08e34c 4689081edb10 "/storage-provisioner" 4 minutes ago Up 4 minutes k8s_storage-provisioner_storage-provisioner_kube-system_c53ea473-aa83-4b9f-bf59-e5e24a84da0f_0
0b6720cac1cc k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_storage-provisioner_kube-system_c53ea473-aa83-4b9f-bf59-e5e24a84da0f_0
b35ea6b210bd bf261d157914 "/coredns -conf /etc…" 4 minutes ago Up 4 minutes k8s_coredns_coredns-5644d7b6d9-6wbrz_kube-system_ee930b0f-9326-4c80-b884-ebea4e634d55_0
724660d5cd94 bf261d157914 "/coredns -conf /etc…" 4 minutes ago Up 4 minutes k8s_coredns_coredns-5644d7b6d9-p42s6_kube-system_ad7582f7-6b8e-4e88-a796-81328bc108b6_0
0e9ae3fdb2b3 c21b0c7400f9 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-lnrml_kube-system_067a55a5-3fab-4150-bcf0-c8a85e197be8_0
c37d2e567d0f k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_coredns-5644d7b6d9-6wbrz_kube-system_ee930b0f-9326-4c80-b884-ebea4e634d55_0
12b7edbc741f k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-lnrml_kube-system_067a55a5-3fab-4150-bcf0-c8a85e197be8_0
2240c304b6b6 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_coredns-5644d7b6d9-p42s6_kube-system_ad7582f7-6b8e-4e88-a796-81328bc108b6_0
8e0240c317b3 bd12a212f9dc "/opt/kube-addons.sh" 5 minutes ago Up 5 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
8912a2cbd966 06a629a7e51c "kube-controller-man…" 5 minutes ago Up 5 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_e640cf9855d72a2348a56dd1f7180a84_0
7a0b4118f99e 301ddc62b80b "kube-scheduler --au…" 5 minutes ago Up 5 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_c18ee741ac4ad7b2bfda7d88116f3047_0
37883eac7382 b305571ca60a "kube-apiserver --ad…" 5 minutes ago Up 5 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_abd295a0c91d72e794cb79e2f57ff5ff_0
079beff2eaf8 b2756210eeab "etcd --advertise-cl…" 5 minutes ago Up 5 minutes k8s_etcd_etcd-minikube_kube-system_fbccf1575bc9fd9e99a4958ba4cb0bd4_0
43d826e50642 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_etcd-minikube_kube-system_fbccf1575bc9fd9e99a4958ba4cb0bd4_0
66d63a32aa2c k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-scheduler-minikube_kube-system_c18ee741ac4ad7b2bfda7d88116f3047_0
ae7458f54fc0 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-controller-manager-minikube_kube-system_e640cf9855d72a2348a56dd1f7180a84_0
98391ae0a2c6 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-apiserver-minikube_kube-system_abd295a0c91d72e794cb79e2f57ff5ff_0
654c18b89f6c k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
==> coredns [724660d5cd94] <==
.:53
2019-10-24T15:31:09.635Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-24T15:31:09.635Z [INFO] CoreDNS-1.6.2
2019-10-24T15:31:09.636Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
==> coredns [b35ea6b210bd] <==
.:53
2019-10-24T15:31:09.632Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-24T15:31:09.632Z [INFO] CoreDNS-1.6.2
2019-10-24T15:31:09.632Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
==> dmesg <==
[oct.24 12:23] [Firmware Bug]: TSC ADJUST: CPU0: -32263713 force to 0
[ +0,000000] [Firmware Bug]: TSC ADJUST differs within socket(s), fixing all errors
[ +0,113873] #2
[ +0,000112] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0,000013] #3
[ +0,004332] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ +0,305949] Initramfs unpacking failed: Decoding failed
[ +0,049749] tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80
[ +0,000013] tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80
[ +0,065561] usb: port power management may be unreliable
[ +0,000455] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
[ +0,001984] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0,000000] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0,000001] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0,363529] i2c_hid i2c-ELAN1300:00: i2c-ELAN1300:00 supply vdd not found, using dummy regulator
[ +0,000010] i2c_hid i2c-ELAN1300:00: i2c-ELAN1300:00 supply vddl not found, using dummy regulator
[oct.24 12:24] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0,120914] systemd[1]: File /lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0,000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0,064432] systemd[1]: Configuration file /etc/systemd/system/openvpnas.service is marked executable. Please remove executable permission bits. Proceeding anyway.
[ +1,077555] uvcvideo 1-5:1.0: Entity type for entity Realtek Extended Controls Unit was not initialized!
[ +0,000002] uvcvideo 1-5:1.0: Entity type for entity Extension 4 was not initialized!
[ +0,000002] uvcvideo 1-5:1.0: Entity type for entity Processing 2 was not initialized!
[ +0,000001] uvcvideo 1-5:1.0: Entity type for entity Camera 1 was not initialized!
[ +0,081265] thermal thermal_zone8: failed to read out thermal zone (-61)
[ +3,557720] vboxdrv: loading out-of-tree module taints kernel.
[ +0,067276] VBoxNetFlt: Successfully started.
[ +0,015438] VBoxNetAdp: Successfully started.
[ +0,043602] VBoxPciLinuxInit
[oct.24 12:25] kauditd_printk_skb: 56 callbacks suppressed
[oct.24 15:05] sr 1:0:0:0: Power-on or device reset occurred
[ +6,137200] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[ +0,584891] usb 1-1.2: device descriptor read/64, error -71
[ +0,615999] usb 1-1.2: device descriptor read/64, error -71
[ +1,298417] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[ +0,331692] usb 1-1.2: can't set config #1, error -71
[ +1,099937] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[oct.24 15:06] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[ +2,384084] sr 1:0:0:0: Power-on or device reset occurred
[oct.24 15:07] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[ +0,180704] usb 1-1.2: device descriptor read/all, error -71
[ +1,174028] usb 1-1-port2: Cannot enable. Maybe the USB cable is bad?
[ +0,161943] usb 1-1.2: Device not responding to setup address.
[ +0,207971] usb 1-1.2: Device not responding to setup address.
[ +0,207884] usb 1-1.2: device not accepting address 20, error -71
[ +0,001525] usb 1-1-port2: unable to enumerate USB device
[ +1,572901] sr 1:0:0:0: Power-on or device reset occurred
[ +0,810121] blk_update_request: I/O error, dev sr0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ +0,000184] usb 1-1.2: usbfs: process 6672 (events) did not claim interface 0 before use
[ +0,000403] sr 1:0:0:0: Power-on or device reset occurred
==> kernel <==
17:36:02 up 5:12, 1 user, load average: 2,69, 2,28, 1,87
Linux ubuntu 5.3.0-19-generic #20-Ubuntu SMP Fri Oct 18 09:04:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-addon-manager [8e0240c317b3] <==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:18+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:23+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:24+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:27+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:29+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:32+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:34+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:37+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:39+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:42+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:43+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:48+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:49+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:53+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:54+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-24T15:35:57+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-24T15:35:59+00:00 ==
==> kube-apiserver [37883eac7382] <==
I1024 15:30:54.456186 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.456422 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1024 15:30:54.482458 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.482520 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1024 15:30:54.488786 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.488818 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
W1024 15:30:54.605025 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1024 15:30:54.620660 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1024 15:30:54.637562 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1024 15:30:54.640555 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1024 15:30:54.650310 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1024 15:30:54.679334 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1024 15:30:54.679374 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1024 15:30:54.698540 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1024 15:30:54.698558 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1024 15:30:54.700080 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.700099 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1024 15:30:54.706438 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.706472 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1024 15:30:54.811117 1 client.go:361] parsed scheme: "endpoint"
I1024 15:30:54.811149 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
I1024 15:30:56.425281 1 secure_serving.go:123] Serving securely on [::]:8443
I1024 15:30:56.425320 1 available_controller.go:383] Starting AvailableConditionController
I1024 15:30:56.425341 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1024 15:30:56.425496 1 controller.go:81] Starting OpenAPI AggregationController
I1024 15:30:56.425613 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1024 15:30:56.425632 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1024 15:30:56.426235 1 crd_finalizer.go:274] Starting CRDFinalizer
I1024 15:30:56.426731 1 autoregister_controller.go:140] Starting autoregister controller
I1024 15:30:56.426743 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1024 15:30:56.432010 1 controller.go:85] Starting OpenAPI controller
I1024 15:30:56.432199 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1024 15:30:56.432300 1 naming_controller.go:288] Starting NamingConditionController
I1024 15:30:56.432395 1 establishing_controller.go:73] Starting EstablishingController
I1024 15:30:56.432477 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1024 15:30:56.432564 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1024 15:30:56.432673 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1024 15:30:56.432739 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1024 15:30:56.432053 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.0.85, ResourceVersion: 0, AdditionalErrorMsg:
I1024 15:30:56.525465 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1024 15:30:56.525825 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1024 15:30:56.526841 1 cache.go:39] Caches are synced for autoregister controller
I1024 15:30:56.536303 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1024 15:30:56.544256 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1024 15:30:57.425614 1 controller.go:107] OpenAPI AggregationController: Processing item
I1024 15:30:57.425702 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1024 15:30:57.425764 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1024 15:30:57.438030 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1024 15:30:57.453990 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1024 15:30:57.454093 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1024 15:30:59.040119 1 controller.go:606] quota admission added evaluator for: endpoints
I1024 15:30:59.207442 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1024 15:30:59.487478 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1024 15:30:59.868707 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.0.85]
I1024 15:31:00.304902 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1024 15:31:01.052638 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1024 15:31:01.394193 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1024 15:31:06.870403 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1024 15:31:06.885253 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1024 15:31:06.891836 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
==> kube-controller-manager [8912a2cbd966] <==
I1024 15:31:05.521433 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I1024 15:31:06.327273 1 controllermanager.go:534] Started "garbagecollector"
I1024 15:31:06.327911 1 garbagecollector.go:130] Starting garbage collector controller
I1024 15:31:06.327943 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1024 15:31:06.327965 1 graph_builder.go:282] GraphBuilder running
I1024 15:31:06.367800 1 controllermanager.go:534] Started "deployment"
I1024 15:31:06.368018 1 deployment_controller.go:152] Starting deployment controller
I1024 15:31:06.368046 1 shared_informer.go:197] Waiting for caches to sync for deployment
I1024 15:31:06.394083 1 controllermanager.go:534] Started "csrsigning"
I1024 15:31:06.394328 1 certificate_controller.go:113] Starting certificate controller
I1024 15:31:06.394353 1 shared_informer.go:197] Waiting for caches to sync for certificate
I1024 15:31:06.575326 1 controllermanager.go:534] Started "tokencleaner"
I1024 15:31:06.575501 1 tokencleaner.go:117] Starting token cleaner controller
I1024 15:31:06.577197 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I1024 15:31:06.678254 1 shared_informer.go:204] Caches are synced for token_cleaner
I1024 15:31:06.721237 1 node_lifecycle_controller.go:77] Sending events to api server
E1024 15:31:06.721364 1 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W1024 15:31:06.721412 1 controllermanager.go:526] Skipping "cloud-node-lifecycle"
I1024 15:31:06.722720 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1024 15:31:06.803275 1 shared_informer.go:204] Caches are synced for namespace
I1024 15:31:06.816914 1 shared_informer.go:204] Caches are synced for HPA
I1024 15:31:06.818236 1 shared_informer.go:204] Caches are synced for endpoint
I1024 15:31:06.818901 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1024 15:31:06.821785 1 shared_informer.go:204] Caches are synced for ReplicationController
I1024 15:31:06.825737 1 shared_informer.go:204] Caches are synced for job
I1024 15:31:06.868175 1 shared_informer.go:204] Caches are synced for deployment
I1024 15:31:06.872060 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"cbccb9bc-a977-49c2-8482-358116d357d5", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1024 15:31:06.872318 1 shared_informer.go:204] Caches are synced for GC
W1024 15:31:06.875208 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1024 15:31:06.875691 1 shared_informer.go:204] Caches are synced for TTL
I1024 15:31:06.876648 1 shared_informer.go:204] Caches are synced for service account
I1024 15:31:06.880547 1 shared_informer.go:204] Caches are synced for daemon sets
I1024 15:31:06.888290 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1024 15:31:06.890804 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"85857bc6-1d9a-41dc-84fb-f9cac4f9eeb9", APIVersion:"apps/v1", ResourceVersion:"302", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-p42s6
I1024 15:31:06.897088 1 shared_informer.go:204] Caches are synced for certificate
I1024 15:31:06.897198 1 shared_informer.go:204] Caches are synced for certificate
I1024 15:31:06.903133 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"85857bc6-1d9a-41dc-84fb-f9cac4f9eeb9", APIVersion:"apps/v1", ResourceVersion:"302", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-6wbrz
I1024 15:31:06.911908 1 shared_informer.go:204] Caches are synced for taint
I1024 15:31:06.911904 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3de803c8-e289-406c-917a-5229481d536f", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-lnrml
I1024 15:31:06.912127 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1024 15:31:06.912297 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W1024 15:31:06.912425 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1024 15:31:06.912462 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I1024 15:31:06.912579 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"08086b72-0a11-469e-83cf-bd610ab61c2e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
E1024 15:31:06.925568 1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1024 15:31:07.075092 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1024 15:31:07.219807 1 shared_informer.go:204] Caches are synced for PVC protection
I1024 15:31:07.219834 1 shared_informer.go:204] Caches are synced for stateful set
I1024 15:31:07.222961 1 shared_informer.go:204] Caches are synced for resource quota
I1024 15:31:07.249834 1 shared_informer.go:204] Caches are synced for resource quota
I1024 15:31:07.267671 1 shared_informer.go:204] Caches are synced for disruption
I1024 15:31:07.267692 1 disruption.go:341] Sending events to api server.
I1024 15:31:07.275067 1 shared_informer.go:204] Caches are synced for attach detach
I1024 15:31:07.323863 1 shared_informer.go:204] Caches are synced for PV protection
I1024 15:31:07.328122 1 shared_informer.go:204] Caches are synced for garbage collector
I1024 15:31:07.328144 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1024 15:31:07.350546 1 shared_informer.go:204] Caches are synced for persistent volume
I1024 15:31:07.374855 1 shared_informer.go:204] Caches are synced for expand
I1024 15:31:07.822565 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1024 15:31:07.922721 1 shared_informer.go:204] Caches are synced for garbage collector
==> kube-proxy [0e9ae3fdb2b3] <==
W1024 15:31:09.388009 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1024 15:31:09.417949 1 node.go:135] Successfully retrieved node IP: 192.168.0.85
I1024 15:31:09.418015 1 server_others.go:149] Using iptables Proxier.
W1024 15:31:09.418892 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1024 15:31:09.419830 1 server.go:529] Version: v1.16.0
I1024 15:31:09.421921 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1024 15:31:09.422390 1 config.go:131] Starting endpoints config controller
I1024 15:31:09.422434 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1024 15:31:09.424284 1 config.go:313] Starting service config controller
I1024 15:31:09.424327 1 shared_informer.go:197] Waiting for caches to sync for service config
I1024 15:31:09.522628 1 shared_informer.go:204] Caches are synced for endpoints config
I1024 15:31:09.525169 1 shared_informer.go:204] Caches are synced for service config
==> kube-scheduler [7a0b4118f99e] <==
I1024 15:30:54.053784 1 serving.go:319] Generated self-signed cert in-memory
W1024 15:30:56.499835 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1024 15:30:56.501900 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1024 15:30:56.501975 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1024 15:30:56.502000 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1024 15:30:56.511048 1 server.go:143] Version: v1.16.0
I1024 15:30:56.511140 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1024 15:30:56.512997 1 authorization.go:47] Authorization is disabled
W1024 15:30:56.513018 1 authentication.go:79] Authentication is disabled
I1024 15:30:56.513031 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1024 15:30:56.513624 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1024 15:30:56.572003 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1024 15:30:56.573629 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1024 15:30:56.573751 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1024 15:30:56.573944 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1024 15:30:56.574105 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1024 15:30:56.578222 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1024 15:30:56.582410 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1024 15:30:56.582484 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1024 15:30:56.582600 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1024 15:30:56.582758 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1024 15:30:56.582895 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1024 15:30:57.577608 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1024 15:30:57.579959 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1024 15:30:57.582742 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1024 15:30:57.585583 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1024 15:30:57.588495 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1024 15:30:57.592103 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1024 15:30:57.597169 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1024 15:30:57.599213 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1024 15:30:57.608174 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1024 15:30:57.611517 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1024 15:30:57.614821 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I1024 15:30:59.516317 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I1024 15:30:59.521289 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
E1024 15:31:06.899192 1 factory.go:585] pod is already present in the activeQ
==> kubelet <==
-- Logs begin at Wed 2019-06-19 15:34:35 CEST, end at Thu 2019-10-24 17:36:02 CEST. --
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.329825 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.430246 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.530356 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.630851 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.731023 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.831321 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:55 ubuntu kubelet[12385]: E1024 17:30:55.931880 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.032032 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.132209 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.232690 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.332840 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.432957 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.490283 12385 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: I1024 17:30:56.532986 12385 reconciler.go:154] Reconciler: start to sync state
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.533037 12385 kubelet.go:2267] node "minikube" not found
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.551005 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef659f57e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecab34d5a59, ext:3409839233, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecab34d5a59, ext:3409839233, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: I1024 17:30:56.563050 12385 kubelet_node_status.go:75] Successfully registered node minikube
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.614643 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992dbd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.667460 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992f55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac750075f, ext:3671817004, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac750075f, ext:3671817004, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.720476 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992fe88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac7501088, ext:3671819350, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac7501088, ext:3671819350, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.780208 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992dbd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac76b12b2, ext:3673589375, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.845903 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992f55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac750075f, ext:3671817004, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac76b24cb, ext:3673594007, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.916641 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992fe88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac7501088, ext:3671819350, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac76b2e49, ext:3673596437, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:56 ubuntu kubelet[12385]: W1024 17:30:56.922768 12385 kubelet_getters.go:292] Path "/var/lib/kubelet/pods/521c6b8c-4afd-48b4-9a5f-683b91202893/volumes" does not exist
oct. 24 17:30:56 ubuntu kubelet[12385]: W1024 17:30:56.922925 12385 kubelet_getters.go:292] Path "/var/lib/kubelet/pods/e8d95ac5-2633-4988-b741-229b2c751e5f/volumes" does not exist
oct. 24 17:30:56 ubuntu kubelet[12385]: W1024 17:30:56.923172 12385 kubelet_getters.go:292] Path "/var/lib/kubelet/pods/1b73bef0-d1d7-4e93-9234-436aa3c9d9b3/volumes" does not exist
oct. 24 17:30:56 ubuntu kubelet[12385]: W1024 17:30:56.931732 12385 kubelet_getters.go:292] Path "/var/lib/kubelet/pods/1b812cb4-9028-44df-8006-44ec6e7d12f9/volumes" does not exist
oct. 24 17:30:56 ubuntu kubelet[12385]: E1024 17:30:56.975036 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef669c4a0f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac781b2f8, ext:3675072196, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac781b2f8, ext:3675072196, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:57 ubuntu kubelet[12385]: E1024 17:30:57.040100 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992dbd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecad1cd43d9, ext:3847796645, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:57 ubuntu kubelet[12385]: E1024 17:30:57.404600 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992f55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac750075f, ext:3671817004, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecad1cd5e66, ext:3847803449, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:57 ubuntu kubelet[12385]: E1024 17:30:57.810955 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992fe88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac7501088, ext:3671819350, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecad1cd710b, ext:3847808215, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:58 ubuntu kubelet[12385]: E1024 17:30:58.221277 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992dbd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecada6778e2, ext:3992120498, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:58 ubuntu kubelet[12385]: E1024 17:30:58.605325 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992f55f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac750075f, ext:3671817004, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecada67de70, ext:3992146494, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:59 ubuntu kubelet[12385]: E1024 17:30:59.008091 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992fe88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac7501088, ext:3671819350, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecada67ffbb, ext:3992155016, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:30:59 ubuntu kubelet[12385]: E1024 17:30:59.404313 12385 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15d09ef66992dbd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecac74fedd5, ext:3671810464, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf648ecada74427a, ext:3992958540, loc:(*time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972027 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/067a55a5-3fab-4150-bcf0-c8a85e197be8-xtables-lock") pod "kube-proxy-lnrml" (UID: "067a55a5-3fab-4150-bcf0-c8a85e197be8")
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972073 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/067a55a5-3fab-4150-bcf0-c8a85e197be8-lib-modules") pod "kube-proxy-lnrml" (UID: "067a55a5-3fab-4150-bcf0-c8a85e197be8")
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972093 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ad7582f7-6b8e-4e88-a796-81328bc108b6-config-volume") pod "coredns-5644d7b6d9-p42s6" (UID: "ad7582f7-6b8e-4e88-a796-81328bc108b6")
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972121 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-pbzzp" (UniqueName: "kubernetes.io/secret/ad7582f7-6b8e-4e88-a796-81328bc108b6-coredns-token-pbzzp") pod "coredns-5644d7b6d9-p42s6" (UID: "ad7582f7-6b8e-4e88-a796-81328bc108b6")
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972148 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/067a55a5-3fab-4150-bcf0-c8a85e197be8-kube-proxy") pod "kube-proxy-lnrml" (UID: "067a55a5-3fab-4150-bcf0-c8a85e197be8")
oct. 24 17:31:06 ubuntu kubelet[12385]: I1024 17:31:06.972177 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-smsc4" (UniqueName: "kubernetes.io/secret/067a55a5-3fab-4150-bcf0-c8a85e197be8-kube-proxy-token-smsc4") pod "kube-proxy-lnrml" (UID: "067a55a5-3fab-4150-bcf0-c8a85e197be8")
oct. 24 17:31:07 ubuntu kubelet[12385]: I1024 17:31:07.072510 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ee930b0f-9326-4c80-b884-ebea4e634d55-config-volume") pod "coredns-5644d7b6d9-6wbrz" (UID: "ee930b0f-9326-4c80-b884-ebea4e634d55")
oct. 24 17:31:07 ubuntu kubelet[12385]: I1024 17:31:07.073026 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-pbzzp" (UniqueName: "kubernetes.io/secret/ee930b0f-9326-4c80-b884-ebea4e634d55-coredns-token-pbzzp") pod "coredns-5644d7b6d9-6wbrz" (UID: "ee930b0f-9326-4c80-b884-ebea4e634d55")
oct. 24 17:31:08 ubuntu kubelet[12385]: W1024 17:31:08.675001 12385 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-p42s6 through plugin: invalid network status for
oct. 24 17:31:08 ubuntu kubelet[12385]: W1024 17:31:08.784710 12385 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-6wbrz through plugin: invalid network status for
oct. 24 17:31:08 ubuntu kubelet[12385]: W1024 17:31:08.784993 12385 pod_container_deletor.go:75] Container "c37d2e567d0ffc906608ebe8ec4115c5155365fc36af33f3b74fc62a4842d7e7" not found in pod's containers
oct. 24 17:31:08 ubuntu kubelet[12385]: W1024 17:31:08.796835 12385 pod_container_deletor.go:75] Container "12b7edbc741fbef95df469a22e6c5f061d734b7662b595ffb528890c20493391" not found in pod's containers
oct. 24 17:31:08 ubuntu kubelet[12385]: W1024 17:31:08.808136 12385 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-p42s6 through plugin: invalid network status for
oct. 24 17:31:08 ubuntu kubelet[12385]: E1024 17:31:08.810722 12385 remote_runtime.go:295] ContainerStatus "724660d5cd9495c4836d3cbee584aea9443449744ccb3bd1a4b744d83ca123ed" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 724660d5cd9495c4836d3cbee584aea9443449744ccb3bd1a4b744d83ca123ed
oct. 24 17:31:08 ubuntu kubelet[12385]: E1024 17:31:08.810899 12385 kuberuntime_manager.go:935] getPodContainerStatuses for pod "coredns-5644d7b6d9-p42s6_kube-system(ad7582f7-6b8e-4e88-a796-81328bc108b6)" failed: rpc error: code = Unknown desc = Error: No such container: 724660d5cd9495c4836d3cbee584aea9443449744ccb3bd1a4b744d83ca123ed
oct. 24 17:31:09 ubuntu kubelet[12385]: I1024 17:31:09.311467 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-5f9dx" (UniqueName: "kubernetes.io/secret/c53ea473-aa83-4b9f-bf59-e5e24a84da0f-storage-provisioner-token-5f9dx") pod "storage-provisioner" (UID: "c53ea473-aa83-4b9f-bf59-e5e24a84da0f")
oct. 24 17:31:09 ubuntu kubelet[12385]: I1024 17:31:09.311533 12385 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/c53ea473-aa83-4b9f-bf59-e5e24a84da0f-tmp") pod "storage-provisioner" (UID: "c53ea473-aa83-4b9f-bf59-e5e24a84da0f")
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.469419 12385 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope: no such file or directory
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.469640 12385 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope: no such file or directory
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.469667 12385 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope: no such file or directory
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.469683 12385 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope: no such file or directory
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.469700 12385 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r8fad7949dd9a4bcf8df37a6c4bce97bd.scope: no such file or directory
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.940599 12385 pod_container_deletor.go:75] Container "0b6720cac1cc222294ce48b41a9aab121684563d1bbabffb427e2dbcad9fec7e" not found in pod's containers
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.957702 12385 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-6wbrz through plugin: invalid network status for
oct. 24 17:31:09 ubuntu kubelet[12385]: W1024 17:31:09.996074 12385 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-p42s6 through plugin: invalid network status for
==> storage-provisioner [4536af08e34c] <==