kubelet logs
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228459 27740 flags.go:52] FLAG: --register-schedulable="true"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228465 27740 flags.go:52] FLAG: --register-with-taints="<nil>"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228472 27740 flags.go:52] FLAG: --registry-burst="10"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228476 27740 flags.go:52] FLAG: --registry-qps="5"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228482 27740 flags.go:52] FLAG: --require-kubeconfig="false"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228486 27740 flags.go:52] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228492 27740 flags.go:52] FLAG: --rkt-api-endpoint="localhost:15441"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228497 27740 flags.go:52] FLAG: --rkt-path=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228501 27740 flags.go:52] FLAG: --rkt-stage1-image=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228506 27740 flags.go:52] FLAG: --root-dir="/var/lib/kubelet"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228512 27740 flags.go:52] FLAG: --rotate-certificates="true"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228516 27740 flags.go:52] FLAG: --runonce="false"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228521 27740 flags.go:52] FLAG: --runtime-cgroups=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228525 27740 flags.go:52] FLAG: --runtime-request-timeout="2m0s"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228530 27740 flags.go:52] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228536 27740 flags.go:52] FLAG: --serialize-image-pulls="true"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228541 27740 flags.go:52] FLAG: --stderrthreshold="2"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228547 27740 flags.go:52] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228552 27740 flags.go:52] FLAG: --storage-driver-db="cadvisor"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228556 27740 flags.go:52] FLAG: --storage-driver-host="localhost:8086"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228561 27740 flags.go:52] FLAG: --storage-driver-password="root"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228565 27740 flags.go:52] FLAG: --storage-driver-secure="false"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228570 27740 flags.go:52] FLAG: --storage-driver-table="stats"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228575 27740 flags.go:52] FLAG: --storage-driver-user="root"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228580 27740 flags.go:52] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228585 27740 flags.go:52] FLAG: --sync-frequency="1m0s"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228590 27740 flags.go:52] FLAG: --system-cgroups=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228594 27740 flags.go:52] FLAG: --system-reserved=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228599 27740 flags.go:52] FLAG: --system-reserved-cgroup=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228604 27740 flags.go:52] FLAG: --tls-cert-file=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228609 27740 flags.go:52] FLAG: --tls-private-key-file=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228613 27740 flags.go:52] FLAG: --v="4"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228618 27740 flags.go:52] FLAG: --version="false"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228627 27740 flags.go:52] FLAG: --vmodule=""
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228632 27740 flags.go:52] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228637 27740 flags.go:52] FLAG: --volume-stats-agg-period="1m0s"
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228652 27740 feature_gate.go:156] feature gates: map[]
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228675 27740 controller.go:114] kubelet config controller: starting controller
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.228681 27740 controller.go:118] kubelet config controller: validating combination of defaults and flags
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.968919 27740 server.go:564] Using self-signed cert (/var/lib/kubelet/pki/kubelet.crt, /var/lib/kubelet/pki/kubelet.key)
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.975253 27740 mount_linux.go:168] Detected OS with systemd
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.975273 27740 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.975292 27740 client.go:95] Start docker client with request timeout=2m0s
Nov 15 01:58:39 af867b kubelet[27740]: W1115 01:58:39.977808 27740 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.983386 27740 iptables.go:564] couldn't get iptables-restore version; assuming it doesn't support --wait
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.985167 27740 feature_gate.go:156] feature gates: map[]
Nov 15 01:58:39 af867b kubelet[27740]: W1115 01:58:39.985334 27740 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Nov 15 01:58:39 af867b kubelet[27740]: I1115 01:58:39.985370 27740 bootstrap.go:57] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
Nov 15 01:58:39 af867b kubelet[27740]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Nov 15 01:58:39 af867b systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 15 01:58:39 af867b systemd[1]: Unit kubelet.service entered failed state.
Nov 15 01:58:39 af867b systemd[1]: kubelet.service failed.
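Note: the fatal error above ("unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory") means this kubelet run had neither /etc/kubernetes/kubelet.conf nor the bootstrap kubeconfig to authenticate with, so it exited and systemd scheduled a restart; the next attempt at 01:58:50 finds kubelet.conf and skips bootstrapping (see "Kubeconfig /etc/kubernetes/kubelet.conf exists, skipping bootstrap" further down). A minimal sketch, not part of the log, to check which of the two files from the --kubeconfig and --bootstrap-kubeconfig flags is present:

import os  # hypothetical helper, paths taken from the flags logged above
for path in ("/etc/kubernetes/kubelet.conf", "/etc/kubernetes/bootstrap-kubelet.conf"):
    print(path, "->", "present" if os.path.isfile(path) else "missing")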
Nov 15 01:58:50 af867b systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 15 01:58:50 af867b systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 15 01:58:50 af867b systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160304 27751 flags.go:52] FLAG: --address="0.0.0.0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160386 27751 flags.go:52] FLAG: --allow-privileged="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160395 27751 flags.go:52] FLAG: --alsologtostderr="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160404 27751 flags.go:52] FLAG: --anonymous-auth="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160408 27751 flags.go:52] FLAG: --application-metrics-count-limit="100"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160413 27751 flags.go:52] FLAG: --authentication-token-webhook="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160418 27751 flags.go:52] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160426 27751 flags.go:52] FLAG: --authorization-mode="Webhook"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160433 27751 flags.go:52] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160438 27751 flags.go:52] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160442 27751 flags.go:52] FLAG: --azure-container-registry-config=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160447 27751 flags.go:52] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160453 27751 flags.go:52] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160458 27751 flags.go:52] FLAG: --cadvisor-port="0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160466 27751 flags.go:52] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160470 27751 flags.go:52] FLAG: --cgroup-driver="cgroupfs"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160475 27751 flags.go:52] FLAG: --cgroup-root=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160479 27751 flags.go:52] FLAG: --cgroups-per-qos="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160483 27751 flags.go:52] FLAG: --chaos-chance="0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160492 27751 flags.go:52] FLAG: --client-ca-file="/etc/kubernetes/pki/ca.crt"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160498 27751 flags.go:52] FLAG: --cloud-config=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160502 27751 flags.go:52] FLAG: --cloud-provider="auto-detect"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160507 27751 flags.go:52] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160515 27751 flags.go:52] FLAG: --cluster-dns="[10.96.0.10]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160529 27751 flags.go:52] FLAG: --cluster-domain="cluster.local"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160535 27751 flags.go:52] FLAG: --cni-bin-dir="/opt/cni/bin"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160540 27751 flags.go:52] FLAG: --cni-conf-dir="/etc/cni/net.d"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160546 27751 flags.go:52] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160552 27751 flags.go:52] FLAG: --container-runtime="docker"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160557 27751 flags.go:52] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160562 27751 flags.go:52] FLAG: --containerized="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160566 27751 flags.go:52] FLAG: --contention-profiling="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160570 27751 flags.go:52] FLAG: --cpu-cfs-quota="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160575 27751 flags.go:52] FLAG: --cpu-manager-policy="none"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160579 27751 flags.go:52] FLAG: --cpu-manager-reconcile-period="10s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160584 27751 flags.go:52] FLAG: --docker="unix:///var/run/docker.sock"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160588 27751 flags.go:52] FLAG: --docker-disable-shared-pid="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160592 27751 flags.go:52] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160598 27751 flags.go:52] FLAG: --docker-env-metadata-whitelist=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160602 27751 flags.go:52] FLAG: --docker-exec-handler="native"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160606 27751 flags.go:52] FLAG: --docker-only="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160610 27751 flags.go:52] FLAG: --docker-root="/var/lib/docker"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160614 27751 flags.go:52] FLAG: --docker-tls="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160618 27751 flags.go:52] FLAG: --docker-tls-ca="ca.pem"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160622 27751 flags.go:52] FLAG: --docker-tls-cert="cert.pem"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160626 27751 flags.go:52] FLAG: --docker-tls-key="key.pem"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160630 27751 flags.go:52] FLAG: --dynamic-config-dir=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160638 27751 flags.go:52] FLAG: --enable-controller-attach-detach="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160642 27751 flags.go:52] FLAG: --enable-custom-metrics="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160646 27751 flags.go:52] FLAG: --enable-debugging-handlers="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160653 27751 flags.go:52] FLAG: --enable-load-reader="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160657 27751 flags.go:52] FLAG: --enable-server="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160662 27751 flags.go:52] FLAG: --enforce-node-allocatable="[pods]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160674 27751 flags.go:52] FLAG: --event-burst="10"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160680 27751 flags.go:52] FLAG: --event-qps="5"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160685 27751 flags.go:52] FLAG: --event-storage-age-limit="default=0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160690 27751 flags.go:52] FLAG: --event-storage-event-limit="default=0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160695 27751 flags.go:52] FLAG: --eviction-hard="memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160700 27751 flags.go:52] FLAG: --eviction-max-pod-grace-period="0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160705 27751 flags.go:52] FLAG: --eviction-minimum-reclaim=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160743 27751 flags.go:52] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160750 27751 flags.go:52] FLAG: --eviction-soft=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160754 27751 flags.go:52] FLAG: --eviction-soft-grace-period=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160758 27751 flags.go:52] FLAG: --exit-on-lock-contention="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160763 27751 flags.go:52] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160767 27751 flags.go:52] FLAG: --experimental-allowed-unsafe-sysctls="[]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160775 27751 flags.go:52] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160781 27751 flags.go:52] FLAG: --experimental-check-node-capabilities-before-mount="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160786 27751 flags.go:52] FLAG: --experimental-dockershim="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160791 27751 flags.go:52] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160806 27751 flags.go:52] FLAG: --experimental-fail-swap-on="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160811 27751 flags.go:52] FLAG: --experimental-kernel-memcg-notification="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160815 27751 flags.go:52] FLAG: --experimental-mounter-path=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160819 27751 flags.go:52] FLAG: --experimental-qos-reserved=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160828 27751 flags.go:52] FLAG: --fail-swap-on="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160833 27751 flags.go:52] FLAG: --feature-gates=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160837 27751 flags.go:52] FLAG: --file-check-frequency="20s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160842 27751 flags.go:52] FLAG: --global-housekeeping-interval="1m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160847 27751 flags.go:52] FLAG: --google-json-key=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160851 27751 flags.go:52] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160856 27751 flags.go:52] FLAG: --healthz-bind-address="127.0.0.1"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160860 27751 flags.go:52] FLAG: --healthz-port="10248"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160864 27751 flags.go:52] FLAG: --host-ipc-sources="[*]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160876 27751 flags.go:52] FLAG: --host-network-sources="[*]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160886 27751 flags.go:52] FLAG: --host-pid-sources="[*]"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160895 27751 flags.go:52] FLAG: --hostname-override=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160899 27751 flags.go:52] FLAG: --housekeeping-interval="10s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160904 27751 flags.go:52] FLAG: --http-check-frequency="20s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160909 27751 flags.go:52] FLAG: --image-gc-high-threshold="85"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160918 27751 flags.go:52] FLAG: --image-gc-low-threshold="80"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160922 27751 flags.go:52] FLAG: --image-pull-progress-deadline="1m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160927 27751 flags.go:52] FLAG: --image-service-endpoint=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160931 27751 flags.go:52] FLAG: --init-config-dir=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160935 27751 flags.go:52] FLAG: --iptables-drop-bit="15"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160939 27751 flags.go:52] FLAG: --iptables-masquerade-bit="14"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160944 27751 flags.go:52] FLAG: --keep-terminated-pod-volumes="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160948 27751 flags.go:52] FLAG: --kube-api-burst="10"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160952 27751 flags.go:52] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160957 27751 flags.go:52] FLAG: --kube-api-qps="5"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160961 27751 flags.go:52] FLAG: --kube-reserved=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160966 27751 flags.go:52] FLAG: --kube-reserved-cgroup=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160970 27751 flags.go:52] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160976 27751 flags.go:52] FLAG: --kubelet-cgroups=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160981 27751 flags.go:52] FLAG: --lock-file=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160985 27751 flags.go:52] FLAG: --log-backtrace-at=":0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160991 27751 flags.go:52] FLAG: --log-cadvisor-usage="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.160997 27751 flags.go:52] FLAG: --log-dir=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161001 27751 flags.go:52] FLAG: --log-flush-frequency="5s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161005 27751 flags.go:52] FLAG: --logtostderr="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161009 27751 flags.go:52] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161015 27751 flags.go:52] FLAG: --make-iptables-util-chains="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161019 27751 flags.go:52] FLAG: --manifest-url=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161023 27751 flags.go:52] FLAG: --manifest-url-header=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161027 27751 flags.go:52] FLAG: --master-service-namespace="default"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161031 27751 flags.go:52] FLAG: --max-open-files="1000000"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161038 27751 flags.go:52] FLAG: --max-pods="110"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161043 27751 flags.go:52] FLAG: --maximum-dead-containers="-1"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161047 27751 flags.go:52] FLAG: --maximum-dead-containers-per-container="1"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161053 27751 flags.go:52] FLAG: --minimum-container-ttl-duration="0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161057 27751 flags.go:52] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161062 27751 flags.go:52] FLAG: --network-plugin="cni"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161066 27751 flags.go:52] FLAG: --network-plugin-dir=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161070 27751 flags.go:52] FLAG: --network-plugin-mtu="0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161075 27751 flags.go:52] FLAG: --node-ip=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161079 27751 flags.go:52] FLAG: --node-labels=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161087 27751 flags.go:52] FLAG: --node-status-update-frequency="10s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161092 27751 flags.go:52] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161096 27751 flags.go:52] FLAG: --oom-score-adj="-999"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161100 27751 flags.go:52] FLAG: --pod-cidr=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161104 27751 flags.go:52] FLAG: --pod-infra-container-image="gcr.io/google_containers/pause-amd64:3.0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161109 27751 flags.go:52] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161113 27751 flags.go:52] FLAG: --pods-per-core="0"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161118 27751 flags.go:52] FLAG: --port="10250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161122 27751 flags.go:52] FLAG: --protect-kernel-defaults="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161126 27751 flags.go:52] FLAG: --provider-id=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161130 27751 flags.go:52] FLAG: --read-only-port="10255"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161134 27751 flags.go:52] FLAG: --really-crash-for-testing="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161138 27751 flags.go:52] FLAG: --register-node="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161143 27751 flags.go:52] FLAG: --register-schedulable="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161148 27751 flags.go:52] FLAG: --register-with-taints="<nil>"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161155 27751 flags.go:52] FLAG: --registry-burst="10"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161160 27751 flags.go:52] FLAG: --registry-qps="5"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161164 27751 flags.go:52] FLAG: --require-kubeconfig="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161169 27751 flags.go:52] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161173 27751 flags.go:52] FLAG: --rkt-api-endpoint="localhost:15441"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161177 27751 flags.go:52] FLAG: --rkt-path=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161182 27751 flags.go:52] FLAG: --rkt-stage1-image=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161187 27751 flags.go:52] FLAG: --root-dir="/var/lib/kubelet"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161192 27751 flags.go:52] FLAG: --rotate-certificates="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161197 27751 flags.go:52] FLAG: --runonce="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161201 27751 flags.go:52] FLAG: --runtime-cgroups=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161205 27751 flags.go:52] FLAG: --runtime-request-timeout="2m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161211 27751 flags.go:52] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161217 27751 flags.go:52] FLAG: --serialize-image-pulls="true"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161221 27751 flags.go:52] FLAG: --stderrthreshold="2"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161226 27751 flags.go:52] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161231 27751 flags.go:52] FLAG: --storage-driver-db="cadvisor"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161236 27751 flags.go:52] FLAG: --storage-driver-host="localhost:8086"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161240 27751 flags.go:52] FLAG: --storage-driver-password="root"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161245 27751 flags.go:52] FLAG: --storage-driver-secure="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161250 27751 flags.go:52] FLAG: --storage-driver-table="stats"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161254 27751 flags.go:52] FLAG: --storage-driver-user="root"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161259 27751 flags.go:52] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161263 27751 flags.go:52] FLAG: --sync-frequency="1m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161268 27751 flags.go:52] FLAG: --system-cgroups=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161272 27751 flags.go:52] FLAG: --system-reserved=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161276 27751 flags.go:52] FLAG: --system-reserved-cgroup=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161280 27751 flags.go:52] FLAG: --tls-cert-file=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161284 27751 flags.go:52] FLAG: --tls-private-key-file=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161288 27751 flags.go:52] FLAG: --v="4"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161292 27751 flags.go:52] FLAG: --version="false"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161306 27751 flags.go:52] FLAG: --vmodule=""
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161311 27751 flags.go:52] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161318 27751 flags.go:52] FLAG: --volume-stats-agg-period="1m0s"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161337 27751 feature_gate.go:156] feature gates: map[]
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161373 27751 controller.go:114] kubelet config controller: starting controller
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.161379 27751 controller.go:118] kubelet config controller: validating combination of defaults and flags
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.201042 27751 mount_linux.go:168] Detected OS with systemd
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.201061 27751 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.201079 27751 client.go:95] Start docker client with request timeout=2m0s
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.202238 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.204753 27751 iptables.go:564] couldn't get iptables-restore version; assuming it doesn't support --wait
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.208194 27751 feature_gate.go:156] feature gates: map[]
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.208336 27751 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.208379 27751 bootstrap.go:49] Kubeconfig /etc/kubernetes/kubelet.conf exists, skipping bootstrap
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.211728 27751 server.go:350] Starting client certificate rotation.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.211751 27751 certificate_manager.go:192] Certificate rotation is enabled.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.211762 27751 certificate_manager.go:322] Certificate rotation deadline is 2018-09-10 17:20:34.010209172 +0000 UTC
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.211779 27751 certificate_manager.go:200] shouldRotate() is true, forcing immediate rotation
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.211783 27751 certificate_manager.go:272] Rotating certificates
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.238095 27751 certificate_manager.go:361] Requesting new certificate.
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.238957 27751 certificate_manager.go:284] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.241.226.117:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.239616 27751 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.239812 27751 certificate_manager.go:214] Waiting 7191h21m43.770412212s for next certificate rotation
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.291243 27751 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.291366 27751 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.353207 27751 fs.go:139] Filesystem UUIDs: map[54b776b7-9cad-4499-83eb-44a283cbe533:/dev/dm-2 76116fc0-ac1e-4350-a196-8b2a40745a21:/dev/dm-0 938071eb-ffd8-471a-883b-a569092d96df:/dev/xvda1 bd4931fb-de32-4ef5-9b28-80c248c5732b:/dev/dm-1]
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.353247 27751 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:19 fsType:tmpfs blockSize:0} /dev/mapper/vg_main-lv_root:{mountpoint:/ major:251 minor:0 fsType:xfs blockSize:0} /dev/mapper/vg_main-lv_appVolume:{mountpoint:/u01/applicationSpace major:251 minor:2 fsType:ext4 blockSize:0} /dev/xvda1:{mountpoint:/boot major:202 minor:1 fsType:xfs blockSize:0} vg_main-lv_docker:{mountpoint: major:251 minor:4 fsType:devicemapper blockSize:1024}]
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.357147 27751 manager.go:216] Machine: {NumCores:2 CpuFrequency:2693566 MemoryCapacity:7570800640 HugePages:[{PageSize:2048 NumPages:0}] MachineID:971c640a83aa4477b9ced5d696a8368d SystemUUID:E17D452D-518E-4CA0-86A9-E10C39526A0E BootID:622e4334-9a44-481c-9fec-23e22f9c6fa7 Filesystems:[{Device:/dev/xvda1 DeviceMajor:202 DeviceMinor:1 Capacity:520794112 Type:vfs Inodes:512000 HasInodes:true} {Device:vg_main-lv_docker DeviceMajor:251 DeviceMinor:4 Capacity:48318382080 Type:devicemapper Inodes:0 HasInodes:false} {Device:tmpfs DeviceMajor:0 DeviceMinor:19 Capacity:3785400320 Type:vfs Inodes:924170 HasInodes:true} {Device:/dev/mapper/vg_main-lv_root DeviceMajor:251 DeviceMinor:0 Capacity:10693378048 Type:vfs Inodes:10452992 HasInodes:true} {Device:/dev/mapper/vg_main-lv_appVolume DeviceMajor:251 DeviceMinor:2 Capacity:21003628544 Type:vfs Inodes:1310720 HasInodes:true}] DiskMap:map[251:0:{Name:dm-0 Major:251 Minor:0 Size:10703863808 Scheduler:none} 251:1:{Name:dm-1 Major:251 Minor:1 Size:21218983936 Scheduler:none} 251:2:{Name:dm-2 Major:251 Minor:2 Size:21474836480 Scheduler:none} 251:3:{Name:dm-3 Major:251 Minor:3 Size:2147483648 Scheduler:none} 251:4:{Name:dm-4 Major:251 Minor:4 Size:48318382080 Scheduler:none} 251:5:{Name:dm-5 Major:251 Minor:5 Size:48318382080 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:c6:b0:53:ea:c2:42 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:7570800640 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:31457280 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:31457280 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.359394 27751 manager.go:222] Version: {KernelVersion:4.1.12-61.1.33.el7uek.x86_64 ContainerOsVersion:Oracle Linux Server 7.2 DockerVersion:17.03.1-ce DockerAPIVersion:1.27 CadvisorVersion: CadvisorRevision:}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.359935 27751 server.go:229] Sending events to api server.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.360010 27751 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361549 27751 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361570 27751 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361681 27751 container_manager_linux.go:288] Creating device plugin handler: false
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361744 27751 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361792 27751 server.go:686] Using root directory: /var/lib/kubelet
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361827 27751 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361863 27751 file.go:52] Watching path "/etc/kubernetes/manifests"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.361884 27751 kubelet.go:283] Watching apiserver
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365063 27751 reflector.go:202] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365115 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365530 27751 reflector.go:202] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365543 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365766 27751 reflector.go:202] Starting reflector *v1.Service (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.365778 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.362695 27751 file.go:161] Reading manifest file "/etc/kubernetes/manifests/etcd.yaml"
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.372898 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.372981 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.374346 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
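Note: the repeated "dial tcp 10.241.226.117:6443: getsockopt: connection refused" errors (the certificate signing request earlier and the three reflector LISTs above) all point at the same cause: the API server at 10.241.226.117:6443 is not serving yet. The kubelet is itself about to start it as a static pod from /etc/kubernetes/manifests/kube-apiserver.yaml (read a few lines further down), so these errors are expected to stop once that pod is running. A minimal sketch, not part of the log, that probes the endpoint seen in the log:

import socket  # hypothetical probe of the apiserver endpoint from the log above
try:
    with socket.create_connection(("10.241.226.117", 6443), timeout=2):
        print("6443 is accepting connections")
except OSError as exc:
    print("still refused:", exc)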
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.381809 27751 iptables.go:564] couldn't get iptables-restore version; assuming it doesn't support --wait
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.387510 27751 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.387548 27751 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.387654 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.387674 27751 plugins.go:187] Loaded network plugin "cni"
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.390982 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.398376 27751 iptables.go:564] couldn't get iptables-restore version; assuming it doesn't support --wait
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.399259 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.399273 27751 plugins.go:187] Loaded network plugin "cni"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.399283 27751 docker_service.go:207] Docker cri networking managed by cni
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.430353 27751 file.go:161] Reading manifest file "/etc/kubernetes/manifests/kube-apiserver.yaml"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.431369 27751 file.go:161] Reading manifest file "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.432377 27751 file.go:161] Reading manifest file "/etc/kubernetes/manifests/kube-scheduler.yaml"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.433839 27751 config.go:282] Setting pods for source file
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.434076 27751 config.go:404] Receiving a new pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.434098 27751 config.go:404] Receiving a new pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.434108 27751 config.go:404] Receiving a new pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.434117 27751 config.go:404] Receiving a new pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.449163 27751 docker_service.go:224] Setting cgroupDriver to cgroupfs
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.450199 27751 docker_legacy.go:151] No legacy containers found, stop performing legacy cleanup.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.450243 27751 kubelet.go:606] Starting the GRPC server for the docker CRI shim.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.450262 27751 docker_server.go:51] Start dockershim grpc server
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.452075 27751 oom_linux.go:65] attempting to set "/proc/1401/oom_score_adj" to "-999"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.452120 27751 oom_linux.go:65] attempting to set "/proc/1443/oom_score_adj" to "-999"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.505521 27751 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.505737 27751 remote_image.go:40] Connecting to image service unix:///var/run/dockershim.sock
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.505987 27751 plugins.go:56] Registering credential provider: .dockercfg
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507418 27751 kuberuntime_manager.go:177] Container runtime docker initialized, version: 17.03.1-ce, apiVersion: 1.27.0
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507676 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507691 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/empty-dir"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507701 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507727 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/git-repo"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507738 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/host-path"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507747 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/nfs"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507756 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/secret"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507766 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/iscsi"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507776 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507785 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/rbd"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507794 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/cinder"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507803 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/quobyte"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507812 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/cephfs"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507823 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/downward-api"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507831 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/fc"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507840 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/flocker"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507849 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/azure-file"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507859 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/configmap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507868 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507878 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507888 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507897 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/projected"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507906 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507915 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/scaleio"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507959 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/local-volume"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.507974 27751 plugins.go:420] Loaded volume plugin "kubernetes.io/storageos"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.509091 27751 server.go:718] Started kubelet v1.8.2
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.509665 27751 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.511538 27751 mount_linux.go:535] Directory /var/lib/kubelet is already on a shared mount
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.511660 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.511981 27751 event.go:209] Unable to write event: 'Post https://10.241.226.117:6443/api/v1/namespaces/default/events: dial tcp 10.241.226.117:6443: getsockopt: connection refused' (may retry after sleeping)
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.512496 27751 server.go:128] Starting to listen on 0.0.0.0:10250
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.512961 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.512993 27751 server.go:148] Starting to listen read-only on 0.0.0.0:10255
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.514002 27751 server.go:296] Adding debug handlers to kubelet server.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.522216 27751 kubelet.go:1222] Container garbage collection succeeded
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.522623 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.522646 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.522656 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523136 27751 node_container_manager.go:70] Attempting to enforce Node Allocatable with config: {KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523207 27751 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523226 27751 status_manager.go:140] Starting to sync pod status with apiserver
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523242 27751 kubelet.go:1768] Starting kubelet main sync loop.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523253 27751 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523328 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523343 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523359 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523454 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523528 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523550 27751 container_manager_linux.go:440] [ContainerManager]: Adding periodic tasks for docker CRI integration
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523607 27751 container_manager_linux.go:446] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523636 27751 oom_linux.go:65] attempting to set "/proc/27751/oom_score_adj" to "-999"
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.523724 27751 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523755 27751 volume_manager.go:244] The desired_state_of_world populator starts
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.523760 27751 volume_manager.go:246] Starting Kubelet Volume Manager
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.525614 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.525906 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.525928 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
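Note: "NetworkReady=false ... cni config uninitialized" together with the earlier "No networks found in /etc/cni/net.d" warnings means no CNI network configuration has been installed on this node yet, so the runtime reports its network as not ready; this clears once a pod-network add-on writes a config into the --cni-conf-dir directory. A minimal sketch, not part of the log, that lists the CNI directories from the --cni-conf-dir and --cni-bin-dir flags logged above:

import os  # hypothetical check of the CNI directories from the flags above
for d in ("/etc/cni/net.d", "/opt/cni/bin"):
    print(d, "->", os.listdir(d) if os.path.isdir(d) else "missing")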
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.627912 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631375 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631402 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631415 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631439 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631747 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631770 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.631784 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.634356 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.656590 27751 factory.go:340] devicemapper filesystem stats will not be reported: unable to find thin_ls binary
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.657816 27751 factory.go:355] Registering Docker factory
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.657844 27751 manager.go:265] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Nov 15 01:58:50 af867b kubelet[27751]: W1115 01:58:50.657951 27751 manager.go:276] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.657963 27751 factory.go:54] Registering systemd factory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658096 27751 factory.go:86] Registering Raw factory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658225 27751 manager.go:1140] Started watching for new ooms in manager
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658286 27751 factory.go:116] Factory "docker" was unable to handle container "/"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658305 27751 factory.go:105] Error trying to work out if we can handle /: / not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658310 27751 factory.go:116] Factory "systemd" was unable to handle container "/"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658318 27751 factory.go:112] Using factory "raw" for container "/"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.658989 27751 manager.go:932] Added container: "/" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.659274 27751 handler.go:325] Added event &{/ 2017-11-14 17:24:13.606 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.659306 27751 manager.go:311] Starting recovery of all containers
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.668163 27751 container.go:409] Start housekeeping for container "/"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688233 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/lvm2-lvmetad.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688279 27751 factory.go:105] Error trying to work out if we can handle /system.slice/lvm2-lvmetad.service: /system.slice/lvm2-lvmetad.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688287 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/lvm2-lvmetad.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688295 27751 factory.go:112] Using factory "raw" for container "/system.slice/lvm2-lvmetad.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688444 27751 manager.go:932] Added container: "/system.slice/lvm2-lvmetad.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688534 27751 handler.go:325] Added event &{/system.slice/lvm2-lvmetad.service 2017-11-14 17:38:36.664194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688570 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/network.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688583 27751 factory.go:105] Error trying to work out if we can handle /system.slice/network.service: /system.slice/network.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688588 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/network.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688594 27751 factory.go:112] Using factory "raw" for container "/system.slice/network.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688700 27751 manager.go:932] Added container: "/system.slice/network.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688885 27751 handler.go:325] Added event &{/system.slice/network.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688907 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/system-serial\\x2dgetty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688916 27751 factory.go:105] Error trying to work out if we can handle /system.slice/system-serial\x2dgetty.slice: /system.slice/system-serial\x2dgetty.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688932 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/system-serial\\x2dgetty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.688940 27751 factory.go:112] Using factory "raw" for container "/system.slice/system-serial\\x2dgetty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689055 27751 manager.go:932] Added container: "/system.slice/system-serial\\x2dgetty.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689146 27751 handler.go:325] Added event &{/system.slice/system-serial\x2dgetty.slice 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689171 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-sysctl.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689179 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-sysctl.service: /system.slice/systemd-sysctl.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689184 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-sysctl.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689190 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-sysctl.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689319 27751 manager.go:932] Added container: "/system.slice/systemd-sysctl.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689408 27751 handler.go:325] Added event &{/system.slice/systemd-sysctl.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689444 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689455 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33: /kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33 not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689460 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689467 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689606 27751 manager.go:932] Added container: "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689733 27751 handler.go:325] Added event &{/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33 2017-11-14 17:38:41.589194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689758 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689771 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap: /system.slice/dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689782 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689791 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.689922 27751 manager.go:932] Added container: "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690030 27751 handler.go:325] Added event &{/system.slice/dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap 2017-11-14 17:38:36.661194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690047 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690053 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690061 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690076 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690085 27751 factory.go:105] Error trying to work out if we can handle /system.slice/system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice: /system.slice/system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690090 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690097 27751 factory.go:112] Using factory "raw" for container "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690214 27751 manager.go:932] Added container: "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690328 27751 handler.go:325] Added event &{/system.slice/system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690348 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-tmpfiles-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690356 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup.service: /system.slice/systemd-tmpfiles-setup.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690360 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-tmpfiles-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690366 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-tmpfiles-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690492 27751 manager.go:932] Added container: "/system.slice/systemd-tmpfiles-setup.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690591 27751 handler.go:325] Added event &{/system.slice/systemd-tmpfiles-setup.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690616 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690624 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242: /kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242 not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690629 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690634 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690785 27751 manager.go:932] Added container: "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690891 27751 handler.go:325] Added event &{/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242 2017-11-14 17:39:06.579194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690911 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/system-getty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690920 27751 factory.go:105] Error trying to work out if we can handle /system.slice/system-getty.slice: /system.slice/system-getty.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690925 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/system-getty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.690931 27751 factory.go:112] Using factory "raw" for container "/system.slice/system-getty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691043 27751 manager.go:932] Added container: "/system.slice/system-getty.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691134 27751 handler.go:325] Added event &{/system.slice/system-getty.slice 2017-11-14 17:38:36.666194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691152 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-udevd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691160 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-udevd.service: /system.slice/systemd-udevd.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691164 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-udevd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691170 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-udevd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691287 27751 manager.go:932] Added container: "/system.slice/systemd-udevd.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691399 27751 handler.go:325] Added event &{/system.slice/systemd-udevd.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691414 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691421 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691429 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691451 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691458 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250: /kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250 not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691463 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691468 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691592 27751 manager.go:932] Added container: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691700 27751 handler.go:325] Added event &{/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250 2017-11-14 17:38:41.583194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691735 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691743 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691750 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691758 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/chronyd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691765 27751 factory.go:105] Error trying to work out if we can handle /system.slice/chronyd.service: /system.slice/chronyd.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691770 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/chronyd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691776 27751 factory.go:112] Using factory "raw" for container "/system.slice/chronyd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.691905 27751 manager.go:932] Added container: "/system.slice/chronyd.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692003 27751 handler.go:325] Added event &{/system.slice/chronyd.service 2017-11-14 17:38:36.660194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692030 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-disk-by\\x2duuid-bd4931fb\\x2dde32\\x2d4ef5\\x2d9b28\\x2d80c248c5732b.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692042 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-disk-by\x2duuid-bd4931fb\x2dde32\x2d4ef5\x2d9b28\x2d80c248c5732b.swap: /system.slice/dev-disk-by\x2duuid-bd4931fb\x2dde32\x2d4ef5\x2d9b28\x2d80c248c5732b.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692049 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-disk-by\\x2duuid-bd4931fb\\x2dde32\\x2d4ef5\\x2d9b28\\x2d80c248c5732b.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692057 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-disk-by\\x2duuid-bd4931fb\\x2dde32\\x2d4ef5\\x2d9b28\\x2d80c248c5732b.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692178 27751 manager.go:932] Added container: "/system.slice/dev-disk-by\\x2duuid-bd4931fb\\x2dde32\\x2d4ef5\\x2d9b28\\x2d80c248c5732b.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692276 27751 handler.go:325] Added event &{/system.slice/dev-disk-by\x2duuid-bd4931fb\x2dde32\x2d4ef5\x2d9b28\x2d80c248c5732b.swap 2017-11-14 17:38:36.661194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692312 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-logind.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692321 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-logind.service: /system.slice/systemd-logind.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692326 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-logind.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692332 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-logind.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692443 27751 manager.go:932] Added container: "/system.slice/systemd-logind.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692546 27751 handler.go:325] Added event &{/system.slice/systemd-logind.service 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692565 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-user-sessions.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692573 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-user-sessions.service: /system.slice/systemd-user-sessions.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692578 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-user-sessions.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692584 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-user-sessions.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692697 27751 manager.go:932] Added container: "/system.slice/systemd-user-sessions.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.692765 27751 container.go:409] Start housekeeping for container "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.693195 27751 container.go:409] Start housekeeping for container "/system.slice/lvm2-lvmetad.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.693595 27751 container.go:409] Start housekeeping for container "/system.slice/network.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.694016 27751 container.go:409] Start housekeeping for container "/system.slice/system-serial\\x2dgetty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.694356 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-sysctl.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.694722 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.695126 27751 container.go:409] Start housekeeping for container "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2d5qQyVBIei1sAiW92atVQlKpHgr5hO0wRiOcalnY9G5qZcIpq1wnIC3VtjIEfyLcn.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.695488 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-udevd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.695948 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-tmpfiles-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.696329 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.696852 27751 container.go:409] Start housekeeping for container "/system.slice/system-getty.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.697217 27751 container.go:409] Start housekeeping for container "/system.slice/chronyd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.697591 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698064 27751 container.go:409] Start housekeeping for container "/system.slice/dev-disk-by\\x2duuid-bd4931fb\\x2dde32\\x2d4ef5\\x2d9b28\\x2d80c248c5732b.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698566 27751 handler.go:325] Added event &{/system.slice/systemd-user-sessions.service 2017-11-14 17:38:36.669194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698587 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698595 27751 factory.go:105] Error trying to work out if we can handle /kubepods: /kubepods not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698600 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698605 27751 factory.go:112] Using factory "raw" for container "/kubepods"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698735 27751 manager.go:932] Added container: "/kubepods" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698848 27751 handler.go:325] Added event &{/kubepods 2017-11-14 17:38:36.658194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698867 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698875 27751 factory.go:105] Error trying to work out if we can handle /system.slice: /system.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698880 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698884 27751 factory.go:112] Using factory "raw" for container "/system.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.698992 27751 manager.go:932] Added container: "/system.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699077 27751 handler.go:325] Added event &{/system.slice 2017-11-14 17:38:36.659194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699094 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-dm\\x2d1.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699101 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-dm\x2d1.swap: /system.slice/dev-dm\x2d1.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699106 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-dm\\x2d1.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699111 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-dm\\x2d1.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699229 27751 manager.go:932] Added container: "/system.slice/dev-dm\\x2d1.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699314 27751 handler.go:325] Added event &{/system.slice/dev-dm\x2d1.swap 2017-11-14 17:38:36.662194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699331 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/docker.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699338 27751 factory.go:105] Error trying to work out if we can handle /system.slice/docker.service: /system.slice/docker.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699343 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/docker.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699349 27751 factory.go:112] Using factory "raw" for container "/system.slice/docker.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699457 27751 manager.go:932] Added container: "/system.slice/docker.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699553 27751 handler.go:325] Added event &{/system.slice/docker.service 2017-11-14 17:38:36.663194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699568 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699574 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699581 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699587 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699592 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699600 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699605 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699610 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699617 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699627 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-update-utmp.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699634 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-update-utmp.service: /system.slice/systemd-update-utmp.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.699652 27751 container.go:409] Start housekeeping for container "/system.slice/docker.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.700108 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-logind.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.700489 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-user-sessions.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.701250 27751 container.go:409] Start housekeeping for container "/system.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.701936 27751 container.go:409] Start housekeeping for container "/kubepods"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702372 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-update-utmp.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702383 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-update-utmp.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702507 27751 manager.go:932] Added container: "/system.slice/systemd-update-utmp.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702601 27751 handler.go:325] Added event &{/system.slice/systemd-update-utmp.service 2017-11-14 17:38:36.669194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702617 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702624 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702631 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702639 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/crond.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702646 27751 factory.go:105] Error trying to work out if we can handle /system.slice/crond.service: /system.slice/crond.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702651 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/crond.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702656 27751 factory.go:112] Using factory "raw" for container "/system.slice/crond.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702791 27751 manager.go:932] Added container: "/system.slice/crond.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702877 27751 handler.go:325] Added event &{/system.slice/crond.service 2017-11-14 17:38:36.660194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702896 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/kmod-static-nodes.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702904 27751 factory.go:105] Error trying to work out if we can handle /system.slice/kmod-static-nodes.service: /system.slice/kmod-static-nodes.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702909 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/kmod-static-nodes.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.702915 27751 factory.go:112] Using factory "raw" for container "/system.slice/kmod-static-nodes.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703036 27751 manager.go:932] Added container: "/system.slice/kmod-static-nodes.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703120 27751 handler.go:325] Added event &{/system.slice/kmod-static-nodes.service 2017-11-14 17:38:36.663194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703144 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/rhel-readonly.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703152 27751 factory.go:105] Error trying to work out if we can handle /system.slice/rhel-readonly.service: /system.slice/rhel-readonly.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703157 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/rhel-readonly.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703162 27751 factory.go:112] Using factory "raw" for container "/system.slice/rhel-readonly.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703277 27751 manager.go:932] Added container: "/system.slice/rhel-readonly.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703363 27751 handler.go:325] Added event &{/system.slice/rhel-readonly.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703384 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dvg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703393 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-disk-by\x2did-dm\x2dname\x2dvg_main\x2dlv_swap.swap: /system.slice/dev-disk-by\x2did-dm\x2dname\x2dvg_main\x2dlv_swap.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703397 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dvg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703405 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dvg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703528 27751 manager.go:932] Added container: "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dvg_main\\x2dlv_swap.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703621 27751 handler.go:325] Added event &{/system.slice/dev-disk-by\x2did-dm\x2dname\x2dvg_main\x2dlv_swap.swap 2017-11-14 17:38:36.661194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703635 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703641 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703649 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703657 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/kubelet.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703664 27751 factory.go:105] Error trying to work out if we can handle /system.slice/kubelet.service: /system.slice/kubelet.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703669 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/kubelet.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703674 27751 factory.go:112] Using factory "raw" for container "/system.slice/kubelet.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703810 27751 manager.go:932] Added container: "/system.slice/kubelet.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703901 27751 handler.go:325] Added event &{/system.slice/kubelet.service 2017-11-15 01:58:50.660194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703918 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sshd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703925 27751 factory.go:105] Error trying to work out if we can handle /system.slice/sshd.service: /system.slice/sshd.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703930 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/sshd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.703935 27751 factory.go:112] Using factory "raw" for container "/system.slice/sshd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704052 27751 manager.go:932] Added container: "/system.slice/sshd.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704133 27751 handler.go:325] Added event &{/system.slice/sshd.service 2017-11-14 17:38:36.666194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704150 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-fsck-root.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704158 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-fsck-root.service: /system.slice/systemd-fsck-root.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704163 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-fsck-root.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704168 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-fsck-root.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704294 27751 manager.go:932] Added container: "/system.slice/systemd-fsck-root.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704380 27751 handler.go:325] Added event &{/system.slice/systemd-fsck-root.service 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704397 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-random-seed.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704406 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-random-seed.service: /system.slice/systemd-random-seed.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704410 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-random-seed.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704416 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-random-seed.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704527 27751 manager.go:932] Added container: "/system.slice/systemd-random-seed.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704615 27751 handler.go:325] Added event &{/system.slice/systemd-random-seed.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704632 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/tuned.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704639 27751 factory.go:105] Error trying to work out if we can handle /system.slice/tuned.service: /system.slice/tuned.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704644 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/tuned.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704650 27751 factory.go:112] Using factory "raw" for container "/system.slice/tuned.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704890 27751 manager.go:932] Added container: "/system.slice/tuned.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.704984 27751 handler.go:325] Added event &{/system.slice/tuned.service 2017-11-14 17:38:36.669194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705001 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/acpid.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705009 27751 factory.go:105] Error trying to work out if we can handle /system.slice/acpid.service: /system.slice/acpid.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705013 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/acpid.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705018 27751 factory.go:112] Using factory "raw" for container "/system.slice/acpid.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705138 27751 manager.go:932] Added container: "/system.slice/acpid.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705226 27751 handler.go:325] Added event &{/system.slice/acpid.service 2017-11-14 17:38:36.660194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705243 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-vg_main-lv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705251 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-vg_main-lv_swap.swap: /system.slice/dev-vg_main-lv_swap.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705257 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-vg_main-lv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705263 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-vg_main-lv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705374 27751 manager.go:932] Added container: "/system.slice/dev-vg_main-lv_swap.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705459 27751 handler.go:325] Added event &{/system.slice/dev-vg_main-lv_swap.swap 2017-11-14 17:38:36.662194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705478 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-journal-flush.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705490 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-journal-flush.service: /system.slice/systemd-journal-flush.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705496 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-journal-flush.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705501 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-journal-flush.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705618 27751 manager.go:932] Added container: "/system.slice/systemd-journal-flush.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705703 27751 handler.go:325] Added event &{/system.slice/systemd-journal-flush.service 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705754 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705766 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/pod42253414d7c5f285b756a2243a4df250: /kubepods/burstable/pod42253414d7c5f285b756a2243a4df250 not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705771 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705777 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.705911 27751 manager.go:932] Added container: "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706021 27751 handler.go:325] Added event &{/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250 2017-11-14 17:38:41.571194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706039 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dm-event.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706046 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dm-event.service: /system.slice/dm-event.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706050 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dm-event.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706056 27751 factory.go:112] Using factory "raw" for container "/system.slice/dm-event.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706170 27751 manager.go:932] Added container: "/system.slice/dm-event.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706256 27751 handler.go:325] Added event &{/system.slice/dm-event.service 2017-11-14 17:38:36.663194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706278 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/opc-guest-agent.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706285 27751 factory.go:105] Error trying to work out if we can handle /system.slice/opc-guest-agent.service: /system.slice/opc-guest-agent.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706290 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/opc-guest-agent.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706295 27751 factory.go:112] Using factory "raw" for container "/system.slice/opc-guest-agent.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706402 27751 manager.go:932] Added container: "/system.slice/opc-guest-agent.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706488 27751 handler.go:325] Added event &{/system.slice/opc-guest-agent.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706505 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706512 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable: /kubepods/burstable not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706517 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706522 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706650 27751 manager.go:932] Added container: "/kubepods/burstable" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706764 27751 handler.go:325] Added event &{/kubepods/burstable 2017-11-14 17:38:36.658194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706785 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dbus.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706792 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dbus.service: /system.slice/dbus.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706797 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dbus.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706802 27751 factory.go:112] Using factory "raw" for container "/system.slice/dbus.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.706916 27751 manager.go:932] Added container: "/system.slice/dbus.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707009 27751 handler.go:325] Added event &{/system.slice/dbus.service 2017-11-14 17:38:36.660194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707033 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/irqbalance.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707041 27751 factory.go:105] Error trying to work out if we can handle /system.slice/irqbalance.service: /system.slice/irqbalance.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707046 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/irqbalance.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707052 27751 factory.go:112] Using factory "raw" for container "/system.slice/irqbalance.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707158 27751 manager.go:932] Added container: "/system.slice/irqbalance.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707251 27751 handler.go:325] Added event &{/system.slice/irqbalance.service 2017-11-14 17:38:36.663194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707267 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/lvm2-monitor.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707275 27751 factory.go:105] Error trying to work out if we can handle /system.slice/lvm2-monitor.service: /system.slice/lvm2-monitor.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707280 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/lvm2-monitor.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707285 27751 factory.go:112] Using factory "raw" for container "/system.slice/lvm2-monitor.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707403 27751 manager.go:932] Added container: "/system.slice/lvm2-monitor.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707487 27751 handler.go:325] Added event &{/system.slice/lvm2-monitor.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707504 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/serial_console.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707512 27751 factory.go:105] Error trying to work out if we can handle /system.slice/serial_console.service: /system.slice/serial_console.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707517 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/serial_console.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707522 27751 factory.go:112] Using factory "raw" for container "/system.slice/serial_console.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707647 27751 manager.go:932] Added container: "/system.slice/serial_console.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707756 27751 handler.go:325] Added event &{/system.slice/serial_console.service 2017-11-14 17:38:36.666194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707776 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-remount-fs.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707784 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-remount-fs.service: /system.slice/systemd-remount-fs.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707789 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-remount-fs.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707794 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-remount-fs.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.707914 27751 manager.go:932] Added container: "/system.slice/systemd-remount-fs.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708000 27751 handler.go:325] Added event &{/system.slice/systemd-remount-fs.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708022 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-vconsole-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708031 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-vconsole-setup.service: /system.slice/systemd-vconsole-setup.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708035 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-vconsole-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708042 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-vconsole-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708160 27751 manager.go:932] Added container: "/system.slice/systemd-vconsole-setup.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708253 27751 handler.go:325] Added event &{/system.slice/systemd-vconsole-setup.service 2017-11-14 17:38:36.669194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708269 27751 factory.go:116] Factory "docker" was unable to handle container "/user.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708276 27751 factory.go:105] Error trying to work out if we can handle /user.slice: /user.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708281 27751 factory.go:116] Factory "systemd" was unable to handle container "/user.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708286 27751 factory.go:112] Using factory "raw" for container "/user.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708393 27751 manager.go:932] Added container: "/user.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708475 27751 handler.go:325] Added event &{/user.slice 2017-11-14 17:38:36.670194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708494 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mapper-vg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708504 27751 factory.go:105] Error trying to work out if we can handle /system.slice/dev-mapper-vg_main\x2dlv_swap.swap: /system.slice/dev-mapper-vg_main\x2dlv_swap.swap not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708509 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/dev-mapper-vg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708515 27751 factory.go:112] Using factory "raw" for container "/system.slice/dev-mapper-vg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708632 27751 manager.go:932] Added container: "/system.slice/dev-mapper-vg_main\\x2dlv_swap.swap" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.708926 27751 container.go:409] Start housekeeping for container "/system.slice/dev-dm\\x2d1.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.709277 27751 container.go:409] Start housekeeping for container "/system.slice/dev-vg_main-lv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.709645 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-update-utmp.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.710030 27751 container.go:409] Start housekeeping for container "/system.slice/crond.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.710410 27751 container.go:409] Start housekeeping for container "/system.slice/kmod-static-nodes.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.710785 27751 container.go:409] Start housekeeping for container "/system.slice/rhel-readonly.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.711116 27751 container.go:409] Start housekeeping for container "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dvg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.711470 27751 container.go:409] Start housekeeping for container "/system.slice/kubelet.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.711871 27751 container.go:409] Start housekeeping for container "/system.slice/sshd.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712251 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-fsck-root.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712609 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-random-seed.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712787 27751 handler.go:325] Added event &{/system.slice/dev-mapper-vg_main\x2dlv_swap.swap 2017-11-14 17:38:36.662194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712809 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/rsyslog.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712817 27751 factory.go:105] Error trying to work out if we can handle /system.slice/rsyslog.service: /system.slice/rsyslog.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712822 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/rsyslog.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712827 27751 factory.go:112] Using factory "raw" for container "/system.slice/rsyslog.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.712944 27751 manager.go:932] Added container: "/system.slice/rsyslog.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713029 27751 handler.go:325] Added event &{/system.slice/rsyslog.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713045 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713051 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713058 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713068 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-udev-trigger.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713075 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-udev-trigger.service: /system.slice/systemd-udev-trigger.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713080 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-udev-trigger.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713085 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-udev-trigger.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713202 27751 manager.go:932] Added container: "/system.slice/systemd-udev-trigger.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713296 27751 handler.go:325] Added event &{/system.slice/systemd-udev-trigger.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713314 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-journald.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713322 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-journald.service: /system.slice/systemd-journald.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713326 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-journald.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713332 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-journald.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713446 27751 manager.go:932] Added container: "/system.slice/systemd-journald.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713539 27751 handler.go:325] Added event &{/system.slice/systemd-journald.service 2017-11-14 17:38:36.667194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713558 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/systemd-tmpfiles-setup-dev.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713566 27751 factory.go:105] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup-dev.service: /system.slice/systemd-tmpfiles-setup-dev.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713570 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/systemd-tmpfiles-setup-dev.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713577 27751 factory.go:112] Using factory "raw" for container "/system.slice/systemd-tmpfiles-setup-dev.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713690 27751 manager.go:932] Added container: "/system.slice/systemd-tmpfiles-setup-dev.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713830 27751 handler.go:325] Added event &{/system.slice/systemd-tmpfiles-setup-dev.service 2017-11-14 17:38:36.668194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713848 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713856 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort: /kubepods/besteffort not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713860 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713866 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.713976 27751 manager.go:932] Added container: "/kubepods/besteffort" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714077 27751 handler.go:325] Added event &{/kubepods/besteffort 2017-11-14 17:38:36.658194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714091 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714097 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714104 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714113 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/rhel-dmesg.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714120 27751 factory.go:105] Error trying to work out if we can handle /system.slice/rhel-dmesg.service: /system.slice/rhel-dmesg.service not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714124 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/rhel-dmesg.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714129 27751 factory.go:112] Using factory "raw" for container "/system.slice/rhel-dmesg.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714240 27751 manager.go:932] Added container: "/system.slice/rhel-dmesg.service" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714331 27751 handler.go:325] Added event &{/system.slice/rhel-dmesg.service 2017-11-14 17:38:36.665194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714346 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714352 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714359 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714369 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/system-lvm2\\x2dpvscan.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714376 27751 factory.go:105] Error trying to work out if we can handle /system.slice/system-lvm2\x2dpvscan.slice: /system.slice/system-lvm2\x2dpvscan.slice not handled by systemd handler
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714381 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/system-lvm2\\x2dpvscan.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714386 27751 factory.go:112] Using factory "raw" for container "/system.slice/system-lvm2\\x2dpvscan.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714505 27751 manager.go:932] Added container: "/system.slice/system-lvm2\\x2dpvscan.slice" (aliases: [], namespace: "")
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714603 27751 handler.go:325] Added event &{/system.slice/system-lvm2\x2dpvscan.slice 2017-11-14 17:38:36.666194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.714617 27751 manager.go:316] Recovery completed
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.720792 27751 container.go:409] Start housekeeping for container "/system.slice/system-lvm2\\x2dpvscan.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.721340 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-journal-flush.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.721800 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.722333 27751 container.go:409] Start housekeeping for container "/system.slice/dm-event.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.722893 27751 container.go:409] Start housekeeping for container "/system.slice/opc-guest-agent.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.723385 27751 container.go:409] Start housekeeping for container "/kubepods/burstable"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.723986 27751 container.go:409] Start housekeeping for container "/system.slice/dbus.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.726465 27751 container.go:409] Start housekeeping for container "/system.slice/irqbalance.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.727009 27751 container.go:409] Start housekeeping for container "/system.slice/lvm2-monitor.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.727473 27751 container.go:409] Start housekeeping for container "/system.slice/serial_console.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.727992 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-remount-fs.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.728558 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-vconsole-setup.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.728848 27751 container.go:409] Start housekeeping for container "/system.slice/tuned.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.729248 27751 container.go:409] Start housekeeping for container "/system.slice/acpid.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.732742 27751 container.go:409] Start housekeeping for container "/user.slice"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.733135 27751 container.go:409] Start housekeeping for container "/system.slice/dev-mapper-vg_main\\x2dlv_swap.swap"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.733460 27751 container.go:409] Start housekeeping for container "/system.slice/rsyslog.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.733835 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-udev-trigger.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.734167 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-journald.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.734531 27751 container.go:409] Start housekeeping for container "/system.slice/systemd-tmpfiles-setup-dev.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.734885 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.735288 27751 container.go:409] Start housekeeping for container "/system.slice/rhel-dmesg.service"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.791991 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792033 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792045 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792053 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792062 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792071 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792079 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792086 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792095 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792102 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792109 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792118 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792125 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792132 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792141 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792147 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792155 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792163 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792171 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792179 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792188 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792194 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792202 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792210 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792217 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792224 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792233 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792240 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792248 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792256 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792263 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792270 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792280 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.792338 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.792446 27751 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'af867b' not found
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.834549 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837086 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837113 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837125 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837141 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837408 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837434 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:50 af867b kubelet[27751]: I1115 01:58:50.837448 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:50 af867b kubelet[27751]: E1115 01:58:50.839121 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.239355 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244096 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244147 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244160 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244180 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244501 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244525 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.244539 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:51 af867b kubelet[27751]: E1115 01:58:51.244816 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.373119 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:51 af867b kubelet[27751]: E1115 01:58:51.374210 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.374699 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:51 af867b kubelet[27751]: E1115 01:58:51.375297 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:51 af867b kubelet[27751]: I1115 01:58:51.376957 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:51 af867b kubelet[27751]: E1115 01:58:51.377422 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.045079 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048064 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048095 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048105 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048123 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048440 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048464 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.048479 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:52 af867b kubelet[27751]: E1115 01:58:52.051022 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.374455 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.375621 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:52 af867b kubelet[27751]: E1115 01:58:52.377170 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:52 af867b kubelet[27751]: E1115 01:58:52.377231 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:52 af867b kubelet[27751]: I1115 01:58:52.377944 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:52 af867b kubelet[27751]: E1115 01:58:52.378771 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.377441 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:53 af867b kubelet[27751]: E1115 01:58:53.378677 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.379207 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:53 af867b kubelet[27751]: E1115 01:58:53.379755 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.380654 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:53 af867b kubelet[27751]: E1115 01:58:53.381163 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.651300 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654073 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654107 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654119 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654136 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654401 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654425 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:53 af867b kubelet[27751]: I1115 01:58:53.654446 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:53 af867b kubelet[27751]: E1115 01:58:53.656627 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:54 af867b kubelet[27751]: I1115 01:58:54.378951 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:54 af867b kubelet[27751]: E1115 01:58:54.380045 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:54 af867b kubelet[27751]: I1115 01:58:54.380337 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:54 af867b kubelet[27751]: E1115 01:58:54.380834 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:54 af867b kubelet[27751]: I1115 01:58:54.381390 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:54 af867b kubelet[27751]: E1115 01:58:54.382064 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.380290 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.381601 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.382213 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.382899 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.383112 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.383963 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.523483 27751 kubelet.go:1837] SyncLoop (ADD, "file"): "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9), kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373), etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250), kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.523580 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526532 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526565 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526577 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526697 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526935 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526959 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.526983 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.527047 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.527138 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.530596 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.530625 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.530638 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.530692 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.531034 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.531048 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.531058 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.531298 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532667 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532694 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532708 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532750 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532821 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532847 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532864 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.532877 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.534922 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.534944 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9: /kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9 not handled by systemd handler
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.534951 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.534961 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.535284 27751 manager.go:932] Added container: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9" (aliases: [], namespace: "")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.535403 27751 handler.go:325] Added event &{/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9 2017-11-15 01:58:55.533194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.535438 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.536806 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: W1115 01:58:55.537114 27751 status_manager.go:431] Failed to get status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.537690 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.540049 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.540637 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.540651 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.540660 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.540703 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.541049 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.541063 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.541071 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.541235 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.542938 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.542966 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.542981 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.543023 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.543087 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.543110 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.543132 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.543147 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: W1115 01:58:55.545413 27751 status_manager.go:431] Failed to get status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.546776 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.546802 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373: /kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373 not handled by systemd handler
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.546809 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.546817 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.546961 27751 manager.go:932] Added container: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373" (aliases: [], namespace: "")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.547065 27751 handler.go:325] Added event &{/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373 2017-11-15 01:58:55.545194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.547096 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.549252 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.549687 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.549813 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550088 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550102 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550110 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550147 27751 kubelet.go:1911] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550203 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550214 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550224 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550348 27751 kubelet.go:1610] Creating a mirror pod for static pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550504 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550526 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550538 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550581 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.550676 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.551632 27751 kubelet.go:1612] Failed creating a mirror pod for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.551754 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.551779 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.551794 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.551807 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: W1115 01:58:55.552671 27751 status_manager.go:431] Failed to get status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/etcd-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.556070 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.556093 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.556104 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.556271 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.557117 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.557140 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.557154 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:55 af867b kubelet[27751]: W1115 01:58:55.558331 27751 status_manager.go:431] Failed to get status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560327 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560353 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d: /kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d not handled by systemd handler
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560359 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560367 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560491 27751 manager.go:932] Added container: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d" (aliases: [], namespace: "")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560588 27751 handler.go:325] Added event &{/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d 2017-11-15 01:58:55.559194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.560623 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.563010 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.563455 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.563649 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.624024 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-k8s-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.624065 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.624122 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-kubeconfig") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.624146 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-flexvolume-dir") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.624172 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs-etc-pki") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724088 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-flexvolume-dir") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724154 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/d76e26fba3bf2bfd215eb29011d55250-etcd") pod "etcd-af867b" (UID: "d76e26fba3bf2bfd215eb29011d55250")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724185 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724210 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs-etc-pki") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724236 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/bc22704d9f4dc5d62a8217cfd5c14373-kubeconfig") pod "kube-scheduler-af867b" (UID: "bc22704d9f4dc5d62a8217cfd5c14373")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724270 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-k8s-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724297 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724325 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-kubeconfig") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724363 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs-etc-pki") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724394 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-k8s-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724538 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-flexvolume-dir") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724573 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "flexvolume-dir"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724677 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-k8s-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724695 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "k8s-certs"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724769 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724789 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "ca-certs"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724834 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-kubeconfig") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724853 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kubeconfig"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724891 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/f49ee4da5c66af63a0b4bcea4f69baf9-ca-certs-etc-pki") pod "kube-controller-manager-af867b" (UID: "f49ee4da5c66af63a0b4bcea4f69baf9")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.724906 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "ca-certs-etc-pki"
Nov 15 01:58:55 af867b kubelet[27751]: W1115 01:58:55.793547 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.793700 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:58:55 af867b kubelet[27751]: E1115 01:58:55.793746 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824008 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-k8s-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824048 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/d76e26fba3bf2bfd215eb29011d55250-etcd") pod "etcd-af867b" (UID: "d76e26fba3bf2bfd215eb29011d55250")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824101 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824133 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs-etc-pki") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824162 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/bc22704d9f4dc5d62a8217cfd5c14373-kubeconfig") pod "kube-scheduler-af867b" (UID: "bc22704d9f4dc5d62a8217cfd5c14373")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824238 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/bc22704d9f4dc5d62a8217cfd5c14373-kubeconfig") pod "kube-scheduler-af867b" (UID: "bc22704d9f4dc5d62a8217cfd5c14373")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824269 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-af867b", UID:"bc22704d9f4dc5d62a8217cfd5c14373", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kubeconfig"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824315 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-k8s-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824330 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "k8s-certs"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824377 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "etcd" (UniqueName: "kubernetes.io/host-path/d76e26fba3bf2bfd215eb29011d55250-etcd") pod "etcd-af867b" (UID: "d76e26fba3bf2bfd215eb29011d55250")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824394 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-af867b", UID:"d76e26fba3bf2bfd215eb29011d55250", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "etcd"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824438 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824453 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "ca-certs"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824489 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "ca-certs-etc-pki" (UniqueName: "kubernetes.io/host-path/4e0fac5dee63099d647b4d031a37ad7d-ca-certs-etc-pki") pod "kube-apiserver-af867b" (UID: "4e0fac5dee63099d647b4d031a37ad7d")
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.824502 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "ca-certs-etc-pki"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840263 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840291 27751 kuberuntime_manager.go:370] No sandbox for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)" can be found. Need to start a new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840305 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840362 27751 kuberuntime_manager.go:565] SyncPod received new pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", will create a sandbox for it
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840371 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", will start new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.840402 27751 kuberuntime_manager.go:626] Creating sandbox for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.842967 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.842990 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851861 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851882 27751 kuberuntime_manager.go:370] No sandbox for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)" can be found. Need to start a new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851893 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851927 27751 kuberuntime_manager.go:565] SyncPod received new pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", will create a sandbox for it
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851936 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", will start new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.851952 27751 kuberuntime_manager.go:626] Creating sandbox for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852148 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852162 27751 kuberuntime_manager.go:370] No sandbox for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)" can be found. Need to start a new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852170 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852205 27751 kuberuntime_manager.go:565] SyncPod received new pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", will create a sandbox for it
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852213 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", will start new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.852227 27751 kuberuntime_manager.go:626] Creating sandbox for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863878 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863898 27751 kuberuntime_manager.go:370] No sandbox for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)" can be found. Need to start a new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863907 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863938 27751 kuberuntime_manager.go:565] SyncPod received new pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", will create a sandbox for it
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863947 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", will start new one
Nov 15 01:58:55 af867b kubelet[27751]: I1115 01:58:55.863961 27751 kuberuntime_manager.go:626] Creating sandbox for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019261 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019307 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019680 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019692 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019966 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.019977 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.381837 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:56 af867b kubelet[27751]: E1115 01:58:56.382646 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.383318 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:56 af867b kubelet[27751]: E1115 01:58:56.383848 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.384608 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:56 af867b kubelet[27751]: E1115 01:58:56.385126 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.530696 27751 kubelet.go:1911] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.742544 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03/resolv.conf with:
Nov 15 01:58:56 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.743421 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.743703 27751 kuberuntime_manager.go:640] Created PodSandbox "439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03" for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.791640 27751 manager.go:932] Added container: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03" (aliases: [k8s_POD_kube-controller-manager-af867b_kube-system_f49ee4da5c66af63a0b4bcea4f69baf9_0 439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03], namespace: "docker")
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.791801 27751 handler.go:325] Added event &{/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 2017-11-15 01:58:56.323194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.791930 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.794801 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810/resolv.conf with:
Nov 15 01:58:56 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.795300 27751 kuberuntime_manager.go:640] Created PodSandbox "fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810" for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.795444 27751 generic.go:146] GenericPLEG: bc22704d9f4dc5d62a8217cfd5c14373/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810: non-existent -> running
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.795475 27751 generic.go:146] GenericPLEG: f49ee4da5c66af63a0b4bcea4f69baf9/439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03: non-existent -> running
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.795830 27751 kuberuntime_manager.go:705] Creating container &Container{Name:kube-controller-manager,Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3,Command:[kube-controller-manager --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner --root-ca-file=/etc/kubernetes/pki/ca.crt --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --address=127.0.0.1 --kubeconfig=/etc/kubernetes/controller-manager.conf --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-key-file=/etc/kubernetes/pki/ca.key],Args:[],WorkingDir:,Ports:[],Env:[{http_proxy http://www-proxy.us.oracle.com:80 nil} {https_proxy https://www-proxy.us.oracle.com:80 nil} {no_proxy 10.241.226.117 nil}],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},},VolumeMounts:[{k8s-certs true /etc/kubernetes/pki <nil>} {ca-certs true /etc/ssl/certs <nil>} {kubeconfig true /etc/kubernetes/controller-manager.conf <nil>} {flexvolume-dir false /usr/libexec/kubernetes/kubelet-plugins/volume/exec <nil>} {ca-certs-etc-pki true /etc/pki <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.835144 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.836202 27751 kuberuntime_container.go:100] Generating ref for container kube-controller-manager: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.836240 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.836279 27751 kubelet_pods.go:123] container: kube-system/kube-controller-manager-af867b/kube-controller-manager podIP: "10.196.65.210" creating hosts mount: true
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.836531 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}): type: 'Normal' reason: 'Pulled' Container image "gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3" already present on machine
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.837154 27751 kuberuntime_manager.go:705] Creating container &Container{Name:kube-scheduler,Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3,Command:[kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf --address=127.0.0.1 --leader-elect=true],Args:[],WorkingDir:,Ports:[],Env:[{http_proxy http://www-proxy.us.oracle.com:80 nil} {https_proxy https://www-proxy.us.oracle.com:80 nil} {no_proxy 10.241.226.117 nil}],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},},},VolumeMounts:[{kubeconfig true /etc/kubernetes/scheduler.conf <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10251,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.857871 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.860814 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.860839 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.860848 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.860870 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.861111 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.861133 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.861145 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:56 af867b kubelet[27751]: E1115 01:58:56.862725 27751 kubelet_node_status.go:107] Unable to register node "af867b" with API server: Post https://10.241.226.117:6443/api/v1/nodes: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.907098 27751 manager.go:932] Added container: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810" (aliases: [k8s_POD_kube-scheduler-af867b_kube-system_bc22704d9f4dc5d62a8217cfd5c14373_0 fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810], namespace: "docker")
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.907268 27751 handler.go:325] Added event &{/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 2017-11-15 01:58:56.650194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.907484 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.908891 27751 kuberuntime_container.go:100] Generating ref for container kube-scheduler: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-af867b", UID:"bc22704d9f4dc5d62a8217cfd5c14373", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-scheduler}"}
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.908937 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.908980 27751 kubelet_pods.go:123] container: kube-system/kube-scheduler-af867b/kube-scheduler podIP: "10.196.65.210" creating hosts mount: true
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.909247 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-af867b", UID:"bc22704d9f4dc5d62a8217cfd5c14373", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-scheduler}"}): type: 'Normal' reason: 'Pulled' Container image "gcr.io/google_containers/kube-scheduler-amd64:v1.8.3" already present on machine
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.910257 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.923653 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373"
Nov 15 01:58:56 af867b kubelet[27751]: I1115 01:58:56.930830 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810"] for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.170072 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.171018 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d/resolv.conf with:
Nov 15 01:58:57 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.171442 27751 kuberuntime_manager.go:640] Created PodSandbox "19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d" for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.171561 27751 generic.go:345] PLEG: Write status for kube-scheduler-af867b/kube-system: &container.PodStatus{ID:"bc22704d9f4dc5d62a8217cfd5c14373", Name:"kube-scheduler-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42123c280)}} (err: <nil>)
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.171652 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", event: &pleg.PodLifecycleEvent{ID:"bc22704d9f4dc5d62a8217cfd5c14373", Type:"ContainerStarted", Data:"fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810"}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.173323 27751 manager.go:932] Added container: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d" (aliases: [k8s_POD_kube-apiserver-af867b_kube-system_4e0fac5dee63099d647b4d031a37ad7d_0 19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d], namespace: "docker")
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.173461 27751 handler.go:325] Added event &{/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d 2017-11-15 01:58:56.869194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.173508 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.176294 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03"] for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.176465 27751 kuberuntime_manager.go:705] Creating container &Container{Name:kube-apiserver,Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3,Command:[kube-apiserver --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --enable-bootstrap-token-auth=true --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-allowed-names=front-proxy-client --advertise-address=10.241.226.117 --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/sa.pub --client-ca-file=/etc/kubernetes/pki/ca.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --insecure-port=0 --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379],Args:[],WorkingDir:,Ports:[],Env:[{http_proxy http://www-proxy.us.oracle.com:80 nil} {https_proxy https://www-proxy.us.oracle.com:80 nil} {no_proxy 10.241.226.117 nil}],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},},VolumeMounts:[{k8s-certs true /etc/kubernetes/pki <nil>} {ca-certs true /etc/ssl/certs <nil>} {ca-certs-etc-pki true /etc/pki <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.178352 27751 kuberuntime_container.go:100] Generating ref for container kube-apiserver: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.178391 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.178422 27751 kubelet_pods.go:123] container: kube-system/kube-apiserver-af867b/kube-apiserver podIP: "10.196.65.210" creating hosts mount: true
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.178643 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}): type: 'Normal' reason: 'Pulled' Container image "gcr.io/google_containers/kube-apiserver-amd64:v1.8.3" already present on machine
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.179854 27751 generic.go:345] PLEG: Write status for kube-controller-manager-af867b/kube-system: &container.PodStatus{ID:"f49ee4da5c66af63a0b4bcea4f69baf9", Name:"kube-controller-manager-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4211e5900)}} (err: <nil>)
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.179904 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", event: &pleg.PodLifecycleEvent{ID:"f49ee4da5c66af63a0b4bcea4f69baf9", Type:"ContainerStarted", Data:"439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03"}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.180382 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.382888 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:57 af867b kubelet[27751]: E1115 01:58:57.384130 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.384180 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:57 af867b kubelet[27751]: E1115 01:58:57.384811 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.385288 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:57 af867b kubelet[27751]: E1115 01:58:57.385978 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.474546 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.505945 27751 worker.go:164] Probe target container not found: kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9) - kube-controller-manager
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.630310 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934/resolv.conf with:
Nov 15 01:58:57 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.631917 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.632232 27751 kuberuntime_manager.go:640] Created PodSandbox "d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934" for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.681507 27751 worker.go:164] Probe target container not found: kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373) - kube-scheduler
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.746631 27751 manager.go:932] Added container: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934" (aliases: [k8s_POD_etcd-af867b_kube-system_d76e26fba3bf2bfd215eb29011d55250_0 d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934], namespace: "docker")
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.746792 27751 handler.go:325] Added event &{/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 2017-11-15 01:58:57.291194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.746916 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.752138 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-af867b", UID:"bc22704d9f4dc5d62a8217cfd5c14373", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-scheduler}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.752304 27751 kuberuntime_manager.go:705] Creating container &Container{Name:etcd,Image:gcr.io/google_containers/etcd-amd64:3.0.17,Command:[etcd --listen-client-urls=http://127.0.0.1:2379 --advertise-client-urls=http://127.0.0.1:2379 --data-dir=/var/lib/etcd],Args:[],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{etcd false /var/lib/etcd <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:2379,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.853969 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.857617 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-af867b", UID:"f49ee4da5c66af63a0b4bcea4f69baf9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.984603 27751 manager.go:932] Added container: "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356" (aliases: [k8s_kube-controller-manager_kube-controller-manager-af867b_kube-system_f49ee4da5c66af63a0b4bcea4f69baf9_0 272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356], namespace: "docker")
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.985367 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-af867b", UID:"bc22704d9f4dc5d62a8217cfd5c14373", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-scheduler}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.985869 27751 kuberuntime_container.go:100] Generating ref for container etcd: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-af867b", UID:"d76e26fba3bf2bfd215eb29011d55250", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.985897 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.985947 27751 kubelet_pods.go:123] container: kube-system/etcd-af867b/etcd podIP: "10.196.65.210" creating hosts mount: true
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.986201 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-af867b", UID:"d76e26fba3bf2bfd215eb29011d55250", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}): type: 'Normal' reason: 'Pulled' Container image "gcr.io/google_containers/etcd-amd64:3.0.17" already present on machine
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.986458 27751 handler.go:325] Added event &{/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356 2017-11-15 01:58:57.645194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.986585 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podf49ee4da5c66af63a0b4bcea4f69baf9/272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.990975 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.994167 27751 manager.go:932] Added container: "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05" (aliases: [k8s_kube-scheduler_kube-scheduler-af867b_kube-system_bc22704d9f4dc5d62a8217cfd5c14373_0 413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05], namespace: "docker")
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.994298 27751 handler.go:325] Added event &{/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05 2017-11-15 01:58:57.871194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.994331 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podbc22704d9f4dc5d62a8217cfd5c14373/413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05"
Nov 15 01:58:57 af867b kubelet[27751]: I1115 01:58:57.996809 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.005581 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.308559 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.313617 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-af867b", UID:"4e0fac5dee63099d647b4d031a37ad7d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.318645 27751 manager.go:932] Added container: "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043" (aliases: [k8s_kube-apiserver_kube-apiserver-af867b_kube-system_4e0fac5dee63099d647b4d031a37ad7d_0 8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043], namespace: "docker")
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.319491 27751 handler.go:325] Added event &{/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043 2017-11-15 01:58:58.130194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.319548 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod4e0fac5dee63099d647b4d031a37ad7d/8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.324705 27751 generic.go:146] GenericPLEG: d76e26fba3bf2bfd215eb29011d55250/d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934: non-existent -> running
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.324752 27751 generic.go:146] GenericPLEG: 4e0fac5dee63099d647b4d031a37ad7d/8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043: non-existent -> running
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.324763 27751 generic.go:146] GenericPLEG: 4e0fac5dee63099d647b4d031a37ad7d/19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d: non-existent -> running
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.324774 27751 generic.go:146] GenericPLEG: bc22704d9f4dc5d62a8217cfd5c14373/413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05: non-existent -> running
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.324785 27751 generic.go:146] GenericPLEG: f49ee4da5c66af63a0b4bcea4f69baf9/272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356: non-existent -> running
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.327367 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934"] for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.328777 27751 generic.go:345] PLEG: Write status for etcd-af867b/kube-system: &container.PodStatus{ID:"d76e26fba3bf2bfd215eb29011d55250", Name:"etcd-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4210b2730)}} (err: <nil>)
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.328871 27751 kubelet.go:1871] SyncLoop (PLEG): "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", event: &pleg.PodLifecycleEvent{ID:"d76e26fba3bf2bfd215eb29011d55250", Type:"ContainerStarted", Data:"d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934"}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.329495 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d"] for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.333611 27751 generic.go:345] PLEG: Write status for kube-apiserver-af867b/kube-system: &container.PodStatus{ID:"4e0fac5dee63099d647b4d031a37ad7d", Name:"kube-apiserver-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4200f8b60)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421158550)}} (err: <nil>)
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.333697 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", event: &pleg.PodLifecycleEvent{ID:"4e0fac5dee63099d647b4d031a37ad7d", Type:"ContainerStarted", Data:"8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043"}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.333747 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", event: &pleg.PodLifecycleEvent{ID:"4e0fac5dee63099d647b4d031a37ad7d", Type:"ContainerStarted", Data:"19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d"}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.333782 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.333847 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.335324 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810"] for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336722 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336743 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336753 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336943 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336971 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.336988 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.337003 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:58 af867b kubelet[27751]: W1115 01:58:58.339143 27751 status_manager.go:431] Failed to get status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.339240 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.339278 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.341569 27751 generic.go:345] PLEG: Write status for kube-scheduler-af867b/kube-system: &container.PodStatus{ID:"bc22704d9f4dc5d62a8217cfd5c14373", Name:"kube-scheduler-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42066e700)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421159d60)}} (err: <nil>)
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.341651 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", event: &pleg.PodLifecycleEvent{ID:"bc22704d9f4dc5d62a8217cfd5c14373", Type:"ContainerStarted", Data:"413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05"}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.341685 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.341767 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.343429 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03"] for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344546 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344571 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344580 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344724 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344892 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344912 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.344926 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:58 af867b kubelet[27751]: W1115 01:58:58.346555 27751 status_manager.go:431] Failed to get status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.346617 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.346660 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.348757 27751 generic.go:345] PLEG: Write status for kube-controller-manager-af867b/kube-system: &container.PodStatus{ID:"f49ee4da5c66af63a0b4bcea4f69baf9", Name:"kube-controller-manager-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4200f9180)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4211e5090)}} (err: <nil>)
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.348821 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", event: &pleg.PodLifecycleEvent{ID:"f49ee4da5c66af63a0b4bcea4f69baf9", Type:"ContainerStarted", Data:"272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356"}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.348853 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.348901 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.350882 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.350900 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.350909 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.351040 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.351222 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.351242 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.351256 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.351481 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.351516 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: W1115 01:58:58.351643 27751 status_manager.go:431] Failed to get status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.384333 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.385242 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.385571 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.386551 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.411993 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.477762 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-af867b", UID:"d76e26fba3bf2bfd215eb29011d55250", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.524042 27751 kubelet.go:1911] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.611626 27751 request.go:462] Throttling request took 224.966026ms, request: GET:https://10.241.226.117:6443/api/v1/services?resourceVersion=0
Nov 15 01:58:58 af867b kubelet[27751]: E1115 01:58:58.612582 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.639700 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.639987 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.649341 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.649486 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.651790 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.651889 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.661796 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f"
Nov 15 01:58:58 af867b kubelet[27751]: W1115 01:58:58.663948 27751 docker_container.go:202] Deleted previously existing symlink file: "/var/log/pods/d76e26fba3bf2bfd215eb29011d55250/etcd_0.log"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.665156 27751 manager.go:932] Added container: "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f" (aliases: [k8s_etcd_etcd-af867b_kube-system_d76e26fba3bf2bfd215eb29011d55250_0 ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f], namespace: "docker")
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.665292 27751 handler.go:325] Added event &{/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f 2017-11-15 01:58:58.571194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.665334 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/podd76e26fba3bf2bfd215eb29011d55250/ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f"
Nov 15 01:58:58 af867b kubelet[27751]: I1115 01:58:58.670879 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-af867b", UID:"d76e26fba3bf2bfd215eb29011d55250", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.197618 27751 event.go:209] Unable to write event: 'Post https://10.241.226.117:6443/api/v1/namespaces/default/events: dial tcp 10.241.226.117:6443: getsockopt: connection refused' (may retry after sleeping)
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.352350 27751 generic.go:146] GenericPLEG: d76e26fba3bf2bfd215eb29011d55250/ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f: non-existent -> running
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.353508 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934"] for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.357306 27751 generic.go:345] PLEG: Write status for etcd-af867b/kube-system: &container.PodStatus{ID:"d76e26fba3bf2bfd215eb29011d55250", Name:"etcd-af867b", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42066f880)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4212b1b30)}} (err: <nil>)
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.357373 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.357455 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358393 27751 kubelet.go:1871] SyncLoop (PLEG): "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", event: &pleg.PodLifecycleEvent{ID:"d76e26fba3bf2bfd215eb29011d55250", Type:"ContainerStarted", Data:"ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f"}
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358448 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358494 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358516 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358571 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358593 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.358631 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.363984 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364011 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364025 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364213 27751 kubelet.go:1610] Creating a mirror pod for static pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364498 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364513 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364522 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364619 27751 status_manager.go:325] Ignoring same status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:872e3d4286a8ef4338df59945cb0d64c2622268ceb3e8a2ce7b52243279b02d0 ContainerID:docker://8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043}] QOSClass:Burstable}
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.364818 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365012 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365026 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365037 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365111 27751 status_manager.go:325] Ignoring same status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a31626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable}
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365254 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365441 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365466 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365476 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365551 27751 status_manager.go:325] Ignoring same status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3e107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356}] QOSClass:Burstable}
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365672 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365771 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365792 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365805 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365819 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365867 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365882 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365895 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365906 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365917 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365929 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365940 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.365952 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.366022 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.366062 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.366146 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.366175 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.366239 27751 kubelet.go:1612] Failed creating a mirror pod for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.366265 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.389829 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.412890 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.414085 27751 kubelet.go:1612] Failed creating a mirror pod for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.414170 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.611825 27751 request.go:462] Throttling request took 245.912534ms, request: GET:https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/etcd-af867b
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.612846 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:58:59 af867b kubelet[27751]: W1115 01:58:59.613045 27751 status_manager.go:431] Failed to get status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Get https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods/etcd-af867b: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.666597 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.666813 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.666977 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.667062 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.667151 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.667215 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.714387 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.714563 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 01:58:59 af867b kubelet[27751]: I1115 01:58:59.811633 27751 request.go:462] Throttling request took 421.650806ms, request: GET:https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0
Nov 15 01:58:59 af867b kubelet[27751]: E1115 01:58:59.813031 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.011855 27751 request.go:462] Throttling request took 598.805681ms, request: GET:https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.018890 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.211647 27751 request.go:462] Throttling request took 598.663516ms, request: GET:https://10.241.226.117:6443/api/v1/services?resourceVersion=0
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.212865 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.360737 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.360816 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363201 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363229 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363241 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363363 27751 status_manager.go:325] Ignoring same status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort}
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363522 27751 kubelet.go:1610] Creating a mirror pod for static pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363648 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363668 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.363680 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.415814 27751 request.go:462] Throttling request took 52.162252ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.416836 27751 kubelet.go:1612] Failed creating a mirror pod for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Post https://10.241.226.117:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.416888 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.523701 27751 kubelet.go:1911] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.717163 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.717299 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.796935 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.797006 27751 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'af867b' not found
Nov 15 01:59:00 af867b kubelet[27751]: W1115 01:59:00.802145 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.802294 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.802323 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:00 af867b kubelet[27751]: I1115 01:59:00.813766 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:59:00 af867b kubelet[27751]: E1115 01:59:00.814565 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:01 af867b kubelet[27751]: I1115 01:59:01.019131 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:59:01 af867b kubelet[27751]: E1115 01:59:01.020532 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:01 af867b kubelet[27751]: I1115 01:59:01.213083 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:59:01 af867b kubelet[27751]: E1115 01:59:01.214349 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:01 af867b kubelet[27751]: I1115 01:59:01.814779 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:59:01 af867b kubelet[27751]: E1115 01:59:01.816108 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:02 af867b kubelet[27751]: I1115 01:59:02.020726 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:59:02 af867b kubelet[27751]: E1115 01:59:02.022074 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:02 af867b kubelet[27751]: I1115 01:59:02.214812 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:59:02 af867b kubelet[27751]: E1115 01:59:02.215854 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.241.226.117:6443/api/v1/services?resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:02 af867b kubelet[27751]: I1115 01:59:02.527843 27751 kubelet.go:1911] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Nov 15 01:59:02 af867b kubelet[27751]: I1115 01:59:02.816796 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:59:02 af867b kubelet[27751]: E1115 01:59:02.817699 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.241.226.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.022798 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:59:03 af867b kubelet[27751]: E1115 01:59:03.024213 27751 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.241.226.117:6443/api/v1/pods?fieldSelector=spec.nodeName%3Daf867b&resourceVersion=0: dial tcp 10.241.226.117:6443: getsockopt: connection refused
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.216799 27751 reflector.go:240] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:413
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.264780 27751 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268197 27751 kubelet_node_status.go:443] Recording NodeHasSufficientDisk event message for node af867b
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268231 27751 kubelet_node_status.go:443] Recording NodeHasSufficientMemory event message for node af867b
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268242 27751 kubelet_node_status.go:443] Recording NodeHasNoDiskPressure event message for node af867b
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268258 27751 kubelet_node_status.go:83] Attempting to register node af867b
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268794 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node af867b status is now: NodeHasSufficientMemory
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268827 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node af867b status is now: NodeHasSufficientDisk
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.268842 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node af867b status is now: NodeHasNoDiskPressure
Nov 15 01:59:03 af867b kubelet[27751]: I1115 01:59:03.817934 27751 reflector.go:240] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:422
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.024437 27751 reflector.go:240] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.027388 27751 config.go:282] Setting pods for source api
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.027423 27751 kubelet.go:1837] SyncLoop (ADD, "api"): ""
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.523445 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.531383 27751 kubelet_pods.go:1704] Orphaned pod "42253414d7c5f285b756a2243a4df250" found, removing pod cgroups
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.531405 27751 kubelet_pods.go:1704] Orphaned pod "9d6dd5e700f66143c0b1a919b27a8a33" found, removing pod cgroups
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.531417 27751 kubelet_pods.go:1704] Orphaned pod "b69bc062-c962-11e7-83ed-c6b053eac242" found, removing pod cgroups
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536066 27751 manager.go:989] Destroyed container: "/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250" (aliases: [], namespace: "")
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536094 27751 handler.go:325] Added event &{/kubepods/burstable/pod42253414d7c5f285b756a2243a4df250 2017-11-15 01:59:04.536085777 +0000 UTC containerDeletion {<nil>}}
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536123 27751 manager.go:989] Destroyed container: "/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33" (aliases: [], namespace: "")
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536131 27751 handler.go:325] Added event &{/kubepods/burstable/pod9d6dd5e700f66143c0b1a919b27a8a33 2017-11-15 01:59:04.536128762 +0000 UTC containerDeletion {<nil>}}
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536142 27751 manager.go:989] Destroyed container: "/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242" (aliases: [], namespace: "")
Nov 15 01:59:04 af867b kubelet[27751]: I1115 01:59:04.536149 27751 handler.go:325] Added event &{/kubepods/besteffort/podb69bc062-c962-11e7-83ed-c6b053eac242 2017-11-15 01:59:04.536147125 +0000 UTC containerDeletion {<nil>}}
Nov 15 01:59:05 af867b kubelet[27751]: W1115 01:59:05.803524 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:05 af867b kubelet[27751]: I1115 01:59:05.803667 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:05 af867b kubelet[27751]: E1115 01:59:05.803690 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:06 af867b kubelet[27751]: I1115 01:59:06.523502 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:07 af867b kubelet[27751]: I1115 01:59:07.284683 27751 kubelet_node_status.go:86] Successfully registered node af867b
Nov 15 01:59:07 af867b kubelet[27751]: E1115 01:59:07.287582 27751 kubelet_node_status.go:390] Error updating node status, will retry: error getting node "af867b": nodes "af867b" not found
Nov 15 01:59:08 af867b kubelet[27751]: I1115 01:59:08.523470 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:09 af867b kubelet[27751]: I1115 01:59:09.439469 27751 request.go:462] Throttling request took 187.939948ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e507c284
Nov 15 01:59:09 af867b kubelet[27751]: I1115 01:59:09.639478 27751 request.go:462] Throttling request took 194.092755ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5080948
Nov 15 01:59:09 af867b kubelet[27751]: I1115 01:59:09.839485 27751 request.go:462] Throttling request took 194.999002ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5082bf8
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.039470 27751 request.go:462] Throttling request took 195.012906ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e507c284
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.239456 27751 request.go:462] Throttling request took 194.13137ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5080948
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.439517 27751 request.go:462] Throttling request took 194.257764ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5082bf8
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.523555 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.639485 27751 request.go:462] Throttling request took 191.386186ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e507c284
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.797215 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:10 af867b kubelet[27751]: W1115 01:59:10.816282 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.816546 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:10 af867b kubelet[27751]: E1115 01:59:10.816587 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.839395 27751 request.go:462] Throttling request took 195.08274ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5080948
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844679 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41099776Ki, capacity: 45Gi, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844745 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7274672Ki, capacity: 7393360Ki
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844754 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844762 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6666096Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844773 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419708Ki, capacity: 10198Mi, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844783 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384879, capacity: 10208Ki, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:10 af867b kubelet[27751]: I1115 01:59:10.844816 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 01:59:11 af867b kubelet[27751]: I1115 01:59:11.039476 27751 request.go:462] Throttling request took 188.874013ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5082bf8
Nov 15 01:59:11 af867b kubelet[27751]: I1115 01:59:11.239544 27751 request.go:462] Throttling request took 192.363192ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5082bf8
Nov 15 01:59:11 af867b kubelet[27751]: I1115 01:59:11.439482 27751 request.go:462] Throttling request took 190.719733ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e507c284
Nov 15 01:59:11 af867b kubelet[27751]: I1115 01:59:11.639505 27751 request.go:462] Throttling request took 195.587175ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5080948
Nov 15 01:59:11 af867b kubelet[27751]: I1115 01:59:11.839487 27751 request.go:462] Throttling request took 193.862011ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e507c284
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.039538 27751 request.go:462] Throttling request took 193.968694ms, request: PATCH:https://10.241.226.117:6443/api/v1/namespaces/default/events/af867b.14f71fc4e5080948
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.239498 27751 request.go:462] Throttling request took 190.282991ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.439482 27751 request.go:462] Throttling request took 196.077185ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.523872 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.639550 27751 request.go:462] Throttling request took 195.008881ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:12 af867b kubelet[27751]: I1115 01:59:12.839499 27751 request.go:462] Throttling request took 193.958228ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.039565 27751 request.go:462] Throttling request took 196.053303ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.239565 27751 request.go:462] Throttling request took 194.031519ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.439491 27751 request.go:462] Throttling request took 195.025944ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.617341 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.617375 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.630228 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 01:59:13 GMT]] 0xc42186b360 2 [] false false map[] 0xc420a31300 0xc42107d760}
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.630288 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.639469 27751 request.go:462] Throttling request took 195.415143ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:13 af867b kubelet[27751]: I1115 01:59:13.839469 27751 request.go:462] Throttling request took 196.54814ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.039464 27751 request.go:462] Throttling request took 197.024593ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.239463 27751 request.go:462] Throttling request took 195.99837ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.439455 27751 request.go:462] Throttling request took 194.821127ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.524403 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.642803 27751 request.go:462] Throttling request took 198.765503ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:14 af867b kubelet[27751]: I1115 01:59:14.839498 27751 request.go:462] Throttling request took 192.497221ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.039479 27751 request.go:462] Throttling request took 196.12028ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.240168 27751 request.go:462] Throttling request took 197.134005ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.439452 27751 request.go:462] Throttling request took 193.069837ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.640872 27751 request.go:462] Throttling request took 198.081111ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:15 af867b kubelet[27751]: W1115 01:59:15.817942 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.818694 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:15 af867b kubelet[27751]: E1115 01:59:15.818759 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
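This W/I/E trio repeats every few seconds because the kubelet periodically rescans /etc/cni/net.d and still finds no CNI network configuration; it stops once a network add-on installs a .conf/.conflist file there. A rough Go sketch of that check (hypothetical helper, not the kubelet's actual cni.go code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI config file exists in dir,
    // which is the condition behind "No networks found in /etc/cni/net.d".
    func cniConfigPresent(dir string) bool {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err == nil && len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        if !cniConfigPresent("/etc/cni/net.d") {
            fmt.Println("network plugin is not ready: cni config uninitialized")
            os.Exit(1)
        }
        fmt.Println("NetworkReady=true")
    }
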
Nov 15 01:59:15 af867b kubelet[27751]: I1115 01:59:15.839455 27751 request.go:462] Throttling request took 185.97892ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:16 af867b kubelet[27751]: I1115 01:59:16.039492 27751 request.go:462] Throttling request took 195.094844ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:16 af867b kubelet[27751]: I1115 01:59:16.239457 27751 request.go:462] Throttling request took 194.804621ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
Nov 15 01:59:16 af867b kubelet[27751]: I1115 01:59:16.439400 27751 request.go:462] Throttling request took 195.383661ms, request: POST:https://10.241.226.117:6443/api/v1/namespaces/kube-system/events
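The many "Throttling request took ..." lines are the kubelet's API client rate-limiting its own event POST/PATCH traffic: each request waits on a client-side token bucket, and request.go logs any wait that exceeds a small latency threshold. A small sketch using client-go's flowcontrol package; the QPS/burst values are illustrative, not read from this node's flags:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket limiter: callers beyond the burst wait for the next token,
        // which is what produces the ~200ms waits logged above.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // 5 QPS, burst 10 (illustrative)
        for i := 0; i < 15; i++ {
            start := time.Now()
            limiter.Accept() // blocks until a token is available
            if wait := time.Since(start); wait > 50*time.Millisecond {
                fmt.Printf("throttled request %d for %v\n", i, wait)
            }
        }
    }
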
Nov 15 01:59:16 af867b kubelet[27751]: I1115 01:59:16.523461 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.506213 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.506264 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.510625 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420cbbe80 2 [] true false map[] 0xc420afca00 <nil>}
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.510703 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.681687 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.681733 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.683669 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420aa7940 2 [] true false map[] 0xc420d4a700 <nil>}
Nov 15 01:59:17 af867b kubelet[27751]: I1115 01:59:17.683713 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
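The prober.go/http.go lines show the kubelet running HTTP liveness probes against the static control-plane pods: a GET on the configured host/port/path, with a 2xx/3xx response counted as success. A minimal standalone sketch of such a probe (not the kubelet's prober; the TLS-skip exists only so the example could hit the self-signed apiserver endpoint):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP health check and returns nil on a 2xx/3xx status.
    func probe(url string) error {
        client := &http.Client{
            Timeout: 10 * time.Second,
            // Illustration only: the apiserver probe above uses a self-signed cert on 6443.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return nil // probe succeeded
        }
        return fmt.Errorf("probe failed: %s", resp.Status)
    }

    func main() {
        fmt.Println(probe("http://127.0.0.1:10251/healthz")) // kube-scheduler healthz
    }
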
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.525427 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.591992 27751 config.go:282] Setting pods for source api
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.592201 27751 config.go:404] Receiving a new pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.592264 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.592389 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.592666 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600132 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600352 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600379 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600423 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600461 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600685 27751 manager.go:932] Added container: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600819 27751 handler.go:325] Added event &{/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242 2017-11-15 01:59:18.597194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.600866 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.614966 27751 status_manager.go:451] Status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.615109 27751 config.go:282] Setting pods for source api
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.615370 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.776553 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.776640 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9729c03a-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.776780 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-gqhfs" (UniqueName: "kubernetes.io/secret/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy-token-gqhfs") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877079 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877156 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9729c03a-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877214 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kube-proxy-token-gqhfs" (UniqueName: "kubernetes.io/secret/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy-token-gqhfs") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877242 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877266 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877362 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9729c03a-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.877773 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "xtables-lock"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.879861 27751 empty_dir.go:264] pod 9729c03a-c9a8-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_kube-proxy-token-gqhfs
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.879884 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs])
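The line above is the kubelet mounting the service-account token volume as a tmpfs: it shells out to systemd-run so the mount lives in a transient scope unit (the run-28837.scope that appears a few lines later is that unit). A sketch of the same invocation from Go, using a placeholder target path instead of the pod's real kubernetes.io~secret directory; it needs root and systemd to actually run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // mountTmpfs mirrors the systemd-run command logged above: create a transient
    // scope that mounts a tmpfs at the given target directory.
    func mountTmpfs(target string) error {
        out, err := exec.Command("systemd-run",
            "--description=Kubernetes transient mount for "+target,
            "--scope", "--",
            "mount", "-t", "tmpfs", "tmpfs", target).CombinedOutput()
        if err != nil {
            return fmt.Errorf("systemd-run failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder path; the kubelet uses the pod's secret volume directory.
        if err := mountTmpfs("/tmp/example-secret-volume"); err != nil {
            log.Fatal(err)
        }
    }
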
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.895791 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.895871 27751 atomic_writer.go:145] pod kube-system/kube-proxy-nnsjf volume kube-proxy: write required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.895977 27751 atomic_writer.go:160] pod kube-system/kube-proxy-nnsjf volume kube-proxy: performed write of new data to ts data directory: /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy/..119811_15_11_01_59_18.128125473
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.896083 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.896121 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kube-proxy"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.900747 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-28837.scope"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.900775 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-28837.scope: /system.slice/run-28837.scope not handled by systemd handler
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.900782 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-28837.scope"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.900790 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-28837.scope"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.900939 27751 manager.go:932] Added container: "/system.slice/run-28837.scope" (aliases: [], namespace: "")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901034 27751 handler.go:325] Added event &{/system.slice/run-28837.scope 2017-11-15 01:59:18.887194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901061 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901072 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901082 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901090 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901097 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901107 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901115 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901122 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901130 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901138 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901147 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901155 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901194 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901202 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901211 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901219 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901227 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901236 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901244 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901251 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901259 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901267 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901285 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901293 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901301 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901308 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901317 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901325 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901333 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901354 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901361 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901369 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901378 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901388 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901394 27751 container.go:409] Start housekeeping for container "/system.slice/run-28837.scope"
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901396 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount", but ignoring.
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.901970 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
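This burst of factory.go/manager.go lines is cAdvisor deciding which handler owns each newly seen cgroup: the docker factory only claims cgroups named after a container ID, the systemd factory claims *.mount units but marks them as ignorable, and everything else (pod-level cgroups, transient scopes) falls through to the raw factory. A toy sketch of that selection order, with a hypothetical interface and illustrative cgroup names:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    type factory struct {
        name   string
        handle func(cgroup string) (canHandle, accept bool)
    }

    // pickFactory asks each factory in order; "can handle but not accept" means
    // the cgroup is ignored entirely, as with the *.mount units above.
    func pickFactory(cgroup string, factories []factory) (string, bool) {
        for _, f := range factories {
            canHandle, accept := f.handle(cgroup)
            if canHandle {
                return f.name, accept
            }
        }
        return "", false
    }

    func main() {
        factories := []factory{
            // docker: only cgroups whose last segment looks like a 64-char container ID
            {"docker", func(c string) (bool, bool) { return len(path.Base(c)) == 64, true }},
            // systemd: claims *.mount units but tells the caller to ignore them
            {"systemd", func(c string) (bool, bool) { return strings.HasSuffix(c, ".mount"), false }},
            // raw: catch-all
            {"raw", func(c string) (bool, bool) { return true, true }},
        }
        for _, c := range []string{
            "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242",
            "/system.slice/var-lib-docker-devicemapper-mnt-example.mount", // illustrative name
        } {
            name, use := pickFactory(c, factories)
            fmt.Println(c, "->", name, "use:", use)
        }
    }
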
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.911799 27751 manager.go:989] Destroyed container: "/system.slice/run-28837.scope" (aliases: [], namespace: "")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.911816 27751 handler.go:325] Added event &{/system.slice/run-28837.scope 2017-11-15 01:59:18.911812017 +0000 UTC containerDeletion {<nil>}}
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.913500 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.913564 27751 atomic_writer.go:145] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: write required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.913642 27751 atomic_writer.go:160] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: performed write of new data to ts data directory: /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs/..119811_15_11_01_59_18.005299980
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.913748 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kube-proxy-token-gqhfs" (UniqueName: "kubernetes.io/secret/9729c03a-c9a8-11e7-89f4-c6b053eac242-kube-proxy-token-gqhfs") pod "kube-proxy-nnsjf" (UID: "9729c03a-c9a8-11e7-89f4-c6b053eac242")
Nov 15 01:59:18 af867b kubelet[27751]: I1115 01:59:18.913780 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kube-proxy-token-gqhfs"
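The atomic_writer.go lines above show how ConfigMap and Secret volumes are populated: the payload is staged in a fresh timestamped "..<ts>" directory and then exposed by swapping a symlink, so a pod reading the volume never observes a partially written update. A simplified sketch of that pattern, assuming the usual "..data" symlink layout rather than copying the kubelet's real atomic_writer:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "time"
    )

    // atomicWrite stages payload in a new timestamped dir, then atomically
    // repoints the "..data" symlink at it.
    func atomicWrite(volumeDir string, payload map[string][]byte) error {
        tsDir := filepath.Join(volumeDir, ".."+time.Now().Format("2006_01_02_15_04_05.000000000"))
        if err := os.MkdirAll(tsDir, 0755); err != nil {
            return err
        }
        for name, data := range payload {
            if err := os.WriteFile(filepath.Join(tsDir, name), data, 0644); err != nil {
                return err
            }
        }
        tmpLink := filepath.Join(volumeDir, "..data_tmp")
        if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
            return err
        }
        return os.Rename(tmpLink, filepath.Join(volumeDir, "..data")) // atomic swap
    }

    func main() {
        // Placeholder volume dir and payload, standing in for the kube-proxy ConfigMap data.
        err := atomicWrite("/tmp/example-volume", map[string][]byte{
            "config.conf": []byte("kind: KubeProxyConfiguration\n"),
        })
        if err != nil {
            log.Fatal(err)
        }
    }
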
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200464 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200584 27751 kuberuntime_manager.go:370] No sandbox for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200613 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200801 27751 kuberuntime_manager.go:565] SyncPod received new pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200832 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", will start new one
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.200878 27751 kuberuntime_manager.go:626] Creating sandbox for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.205243 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.205269 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.646297 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.647984 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069/resolv.conf with:
Nov 15 01:59:19 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.648846 27751 manager.go:932] Added container: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069" (aliases: [k8s_POD_kube-proxy-nnsjf_kube-system_9729c03a-c9a8-11e7-89f4-c6b053eac242_0 91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069], namespace: "docker")
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.648974 27751 handler.go:325] Added event &{/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 2017-11-15 01:59:19.585194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.649014 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.651424 27751 kuberuntime_manager.go:640] Created PodSandbox "91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069" for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.655159 27751 kuberuntime_manager.go:705] Creating container &Container{Name:kube-proxy,Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3,Command:[/usr/local/bin/kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf],Args:[],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{kube-proxy false /var/lib/kube-proxy <nil>} {xtables-lock false /run/xtables.lock <nil>} {kube-proxy-token-gqhfs true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.657471 27751 kuberuntime_container.go:100] Generating ref for container kube-proxy: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:"spec.containers{kube-proxy}"}
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.657517 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.657557 27751 kubelet_pods.go:123] container: kube-system/kube-proxy-nnsjf/kube-proxy podIP: "10.196.65.210" creating hosts mount: true
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.659933 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:"spec.containers{kube-proxy}"}): type: 'Normal' reason: 'Pulled' Container image "gcr.io/google_containers/kube-proxy-amd64:v1.8.3" already present on machine
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.662148 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242"
Nov 15 01:59:19 af867b kubelet[27751]: I1115 01:59:19.983686 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:"spec.containers{kube-proxy}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.158025 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.159890 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-nnsjf", UID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"287", FieldPath:"spec.containers{kube-proxy}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.164369 27751 manager.go:932] Added container: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34" (aliases: [k8s_kube-proxy_kube-proxy-nnsjf_kube-system_9729c03a-c9a8-11e7-89f4-c6b053eac242_0 7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34], namespace: "docker")
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.164555 27751 handler.go:325] Added event &{/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34 2017-11-15 01:59:20.076194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.164597 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.458359 27751 generic.go:146] GenericPLEG: 9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34: non-existent -> running
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.458385 27751 generic.go:146] GenericPLEG: 9729c03a-c9a8-11e7-89f4-c6b053eac242/91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069: non-existent -> running
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.460320 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069"] for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.464727 27751 generic.go:345] PLEG: Write status for kube-proxy-nnsjf/kube-system: &container.PodStatus{ID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", Name:"kube-proxy-nnsjf", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4214881c0)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4212b15e0)}} (err: <nil>)
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.464798 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34"}
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.464838 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"9729c03a-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069"}
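These two SyncLoop (PLEG) events come from the pod lifecycle event generator: it periodically relists containers from the runtime and turns each observed state change (non-existent -> running above, for both the sandbox and the kube-proxy container) into a ContainerStarted event that feeds the sync loop. A toy sketch of that diffing step, with hypothetical types and truncated container IDs:

    package main

    import "fmt"

    type event struct{ ID, Type, Data string }

    // relist compares the previous and current container states and emits a
    // ContainerStarted event for anything newly running.
    func relist(prev, cur map[string]string, podID string) []event {
        var events []event
        for id, state := range cur {
            if prev[id] != state && state == "running" {
                events = append(events, event{ID: podID, Type: "ContainerStarted", Data: id})
            }
        }
        return events
    }

    func main() {
        prev := map[string]string{}
        cur := map[string]string{
            "7ea397ec6048": "running", // kube-proxy container (truncated ID)
            "91510886f3be": "running", // pod sandbox (truncated ID)
        }
        for _, e := range relist(prev, cur, "9729c03a-c9a8-11e7-89f4-c6b053eac242") {
            fmt.Printf("%+v\n", e)
        }
    }
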
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.464883 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.465135 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.473355 27751 status_manager.go:451] Status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:59:20 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:63210bc9690144d41126a646caf03a3d76ddc6d06b8bad119d468193c3e90c24 ContainerID:docker://7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34}] QOSClass:BestEffort})
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.473573 27751 config.go:282] Setting pods for source api
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.473907 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.484552 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.485242 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.488387 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.488492 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.488584 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.488699 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.523514 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549604 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549627 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy: /kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy not handled by systemd handler
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549634 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549645 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549844 27751 manager.go:932] Added container: "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy" (aliases: [], namespace: "")
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549964 27751 handler.go:325] Added event &{/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy 2017-11-15 01:59:20.548194711 +0000 UTC containerCreation {<nil>}}
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.549997 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod9729c03a-c9a8-11e7-89f4-c6b053eac242/7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34/kube-proxy"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.765395 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.765536 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
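Compare this computePodActions result with the one at 01:59:19: on the first sync there was no sandbox, so the kubelet created one and then started kube-proxy; now that both exist and match the spec, the computed action set is empty and the sync is a no-op. A toy sketch of that decision, with hypothetical types rather than the kubelet's real kuberuntime code:

    package main

    import "fmt"

    type podActions struct {
        CreateSandbox     bool
        ContainersToStart []string
    }

    // computePodActions decides whether a sandbox must be created and which
    // desired containers still need to be started.
    func computePodActions(sandboxExists bool, running, desired []string) podActions {
        a := podActions{CreateSandbox: !sandboxExists}
        have := map[string]bool{}
        for _, c := range running {
            have[c] = true
        }
        for _, c := range desired {
            if a.CreateSandbox || !have[c] {
                a.ContainersToStart = append(a.ContainersToStart, c)
            }
        }
        return a
    }

    func main() {
        // First sync: no sandbox yet -> create it and start kube-proxy.
        fmt.Printf("%+v\n", computePodActions(false, nil, []string{"kube-proxy"}))
        // Later syncs: everything already running -> nothing to do.
        fmt.Printf("%+v\n", computePodActions(true, []string{"kube-proxy"}, []string{"kube-proxy"}))
    }
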
Nov 15 01:59:20 af867b kubelet[27751]: W1115 01:59:20.819991 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.820550 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:20 af867b kubelet[27751]: E1115 01:59:20.820579 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.844977 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898283 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41099776Ki, capacity: 45Gi, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898344 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7095676Ki, capacity: 7393360Ki
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898357 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898377 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6666096Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898393 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419708Ki, capacity: 10198Mi, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898406 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384879, capacity: 10208Ki, time: 2017-11-15 01:59:08.748924209 +0000 UTC
Nov 15 01:59:20 af867b kubelet[27751]: I1115 01:59:20.898438 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.471105 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.471583 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.484123 27751 status_manager.go:451] Status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (3, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:59:20 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:63210bc9690144d41126a646caf03a3d76ddc6d06b8bad119d468193c3e90c24 ContainerID:docker://7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34}] QOSClass:BestEffort})
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.484525 27751 config.go:282] Setting pods for source api
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.487641 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.489760 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.489962 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.492438 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.492539 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.492737 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.492548 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.772037 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:21 af867b kubelet[27751]: I1115 01:59:21.772223 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 01:59:22 af867b kubelet[27751]: I1115 01:59:22.523519 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:22 af867b kubelet[27751]: I1115 01:59:22.960004 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 01:59:22 af867b kubelet[27751]: I1115 01:59:22.960066 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.461824 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:23 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421528c60 18 [] true false map[] 0xc421179500 <nil>}
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.461900 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.617422 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.617473 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.625211 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 01:59:23 GMT]] 0xc420f3e640 2 [] false false map[] 0xc421179700 0xc42179cb00}
Nov 15 01:59:23 af867b kubelet[27751]: I1115 01:59:23.625301 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 01:59:24 af867b kubelet[27751]: I1115 01:59:24.523463 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:25 af867b kubelet[27751]: W1115 01:59:25.821983 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:25 af867b kubelet[27751]: I1115 01:59:25.823149 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:25 af867b kubelet[27751]: E1115 01:59:25.823186 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:26 af867b kubelet[27751]: I1115 01:59:26.523545 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.506003 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.506027 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.508388 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4201a0500 2 [] true false map[] 0xc420d4a300 <nil>}
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.508425 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.681677 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.681746 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.683459 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4201a0a20 2 [] true false map[] 0xc420d4a400 <nil>}
Nov 15 01:59:27 af867b kubelet[27751]: I1115 01:59:27.683501 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 01:59:28 af867b kubelet[27751]: I1115 01:59:28.523440 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.523527 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:30 af867b kubelet[27751]: W1115 01:59:30.824684 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.825042 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:30 af867b kubelet[27751]: E1115 01:59:30.825065 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.898653 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967184 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419108Ki, capacity: 10198Mi, time: 2017-11-15 01:59:24.804361047 +0000 UTC
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967236 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384825, capacity: 10208Ki, time: 2017-11-15 01:59:24.804361047 +0000 UTC
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967248 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41082368Ki, capacity: 45Gi, time: 2017-11-15 01:59:24.804361047 +0000 UTC
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967257 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7047332Ki, capacity: 7393360Ki
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967265 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967273 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6625680Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:24.804361047 +0000 UTC
Nov 15 01:59:30 af867b kubelet[27751]: I1115 01:59:30.967316 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 01:59:32 af867b kubelet[27751]: I1115 01:59:32.523973 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:32 af867b kubelet[27751]: I1115 01:59:32.959962 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 01:59:32 af867b kubelet[27751]: I1115 01:59:32.960024 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.462606 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:33 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420b4d540 18 [] true false map[] 0xc420d4a100 <nil>}
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.462671 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.617335 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.617370 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.623899 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 01:59:33 GMT]] 0xc42017f960 2 [] false false map[] 0xc4200dcd00 0xc4212f4370}
Nov 15 01:59:33 af867b kubelet[27751]: I1115 01:59:33.623958 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
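The prober.go / http.go entries above are plain HTTP(S) GETs against each static pod's /healthz endpoint; a 200-class response marks the liveness probe as succeeded. A minimal, self-contained sketch of such a check (standard library only; the URL mirrors the kube-scheduler probe in the log, and this is an illustration, not the kubelet's actual prober):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a single GET against a healthz-style endpoint and treats
// any 2xx/3xx status as success, which is roughly how the prober entries above
// decide "Liveness probe ... succeeded".
func probeHealthz(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed for %s: %d %s", url, resp.StatusCode, string(body))
	}
	fmt.Printf("probe succeeded for %s: %d %s\n", url, resp.StatusCode, string(body))
	return nil
}

func main() {
	// Same endpoint the log shows for the kube-scheduler liveness probe.
	if err := probeHealthz("http://127.0.0.1:10251/healthz"); err != nil {
		fmt.Println(err)
	}
}
```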
Nov 15 01:59:34 af867b kubelet[27751]: I1115 01:59:34.523457 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:35 af867b kubelet[27751]: W1115 01:59:35.826324 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:35 af867b kubelet[27751]: I1115 01:59:35.826482 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:35 af867b kubelet[27751]: E1115 01:59:35.826509 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:36 af867b kubelet[27751]: I1115 01:59:36.523471 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.506112 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.506150 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.507596 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4214c6420 2 [] true false map[] 0xc420431b00 <nil>}
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.507642 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.681696 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.681770 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.683183 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4215283e0 2 [] true false map[] 0xc420ee2100 <nil>}
Nov 15 01:59:37 af867b kubelet[27751]: I1115 01:59:37.683236 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 01:59:38 af867b kubelet[27751]: I1115 01:59:38.523457 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:40 af867b kubelet[27751]: I1115 01:59:40.523523 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:40 af867b kubelet[27751]: W1115 01:59:40.827678 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:40 af867b kubelet[27751]: I1115 01:59:40.827859 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:40 af867b kubelet[27751]: E1115 01:59:40.827884 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:40 af867b kubelet[27751]: I1115 01:59:40.967493 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011291 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6621280Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011334 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419912Ki, capacity: 10198Mi, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011345 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384825, capacity: 10208Ki, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011354 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41082368Ki, capacity: 45Gi, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011363 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7044832Ki, capacity: 7393360Ki
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011369 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 01:59:41 af867b kubelet[27751]: I1115 01:59:41.011389 27751 eviction_manager.go:325] eviction manager: no resources are starved
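Each eviction-manager cycle above records one observation per signal (memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, plus the allocatable variants) and then concludes that nothing has fallen below its threshold. A toy sketch of that comparison, using values copied from the log and commonly cited default thresholds (an assumption; the real thresholds come from this node's kubelet configuration):

```go
package main

import "fmt"

// observation mirrors one "eviction manager: observations" line: a signal name
// plus the available and capacity quantities, expressed here in KiB.
type observation struct {
	signal    string
	available int64 // KiB
	capacity  int64 // KiB
}

func main() {
	// Values copied from the log above; thresholds are illustrative defaults,
	// not values read from this node's kubelet configuration.
	observations := []observation{
		{"memory.available", 6621280, 7393360},
		{"nodefs.available", 7419912, 10198 * 1024},
	}
	thresholdsKi := map[string]int64{
		"memory.available": 100 * 1024,        // 100Mi default hard threshold
		"nodefs.available": 10198 * 1024 / 10, // 10% of nodefs capacity
	}

	starved := false
	for _, obs := range observations {
		if obs.available < thresholdsKi[obs.signal] {
			fmt.Printf("eviction manager: %s is below threshold\n", obs.signal)
			starved = true
		}
	}
	if !starved {
		fmt.Println("eviction manager: no resources are starved")
	}
}
```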
Nov 15 01:59:42 af867b kubelet[27751]: I1115 01:59:42.523518 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:42 af867b kubelet[27751]: I1115 01:59:42.959945 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 01:59:42 af867b kubelet[27751]: I1115 01:59:42.960001 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.461406 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:43 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421438600 18 [] true false map[] 0xc420afd600 <nil>}
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.461479 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.617343 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.617382 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.624651 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 01:59:43 GMT]] 0xc4212e1420 2 [] false false map[] 0xc420a30700 0xc421002840}
Nov 15 01:59:43 af867b kubelet[27751]: I1115 01:59:43.624709 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 01:59:44 af867b kubelet[27751]: I1115 01:59:44.523440 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:45 af867b kubelet[27751]: W1115 01:59:45.829608 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:45 af867b kubelet[27751]: I1115 01:59:45.829816 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:45 af867b kubelet[27751]: E1115 01:59:45.829840 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:46 af867b kubelet[27751]: I1115 01:59:46.523452 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.506092 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.506129 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.507009 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 01:59:47 GMT]] 0xc420b4df60 2 [] true false map[] 0xc42110a600 <nil>}
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.507043 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.681637 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.681683 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.682860 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:47 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421211aa0 2 [] true false map[] 0xc421178c00 <nil>}
Nov 15 01:59:47 af867b kubelet[27751]: I1115 01:59:47.682904 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 01:59:48 af867b kubelet[27751]: I1115 01:59:48.523444 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.523525 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.525879 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.542147 27751 kubelet.go:1222] Container garbage collection succeeded
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823071 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823117 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823127 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823134 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823140 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823147 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823153 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823161 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823171 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823179 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823185 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823192 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823197 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823205 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823213 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823220 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823226 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823233 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823237 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823243 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823250 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823255 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823263 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823272 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823279 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823286 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823294 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823301 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823308 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823316 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823323 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823329 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823335 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823340 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823346 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823353 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823358 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823365 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823374 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823381 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823386 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823393 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823397 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823404 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823411 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823416 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823423 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823431 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823438 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823445 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823454 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823461 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823468 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823475 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823480 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823487 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823496 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823503 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823509 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823516 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823521 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823528 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823538 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823545 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823552 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823561 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823569 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823576 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount", but ignoring.
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.823584 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
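The long run of factory.go / manager.go lines above is cAdvisor deciding which factory owns each cgroup: the docker factory declines everything that is not a container, and the systemd factory claims the .mount units but they are deliberately ignored, so no stats are collected for them. A small sketch of that kind of filtering over cgroup names (illustrative only; cAdvisor's real factory interface is richer than this):

```go
package main

import (
	"fmt"
	"strings"
)

// ignoreCgroup mimics the effect visible in the log: systemd-style .mount
// units are acknowledged but skipped, so nothing under them gets watched
// for container stats.
func ignoreCgroup(name string) bool {
	return strings.HasSuffix(name, ".mount")
}

func main() {
	cgroups := []string{
		"/system.slice/run-user-1000.mount",
		"/system.slice/boot.mount",
		"/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242",
	}
	for _, c := range cgroups {
		if ignoreCgroup(c) {
			fmt.Printf("ignoring container %q\n", c)
			continue
		}
		fmt.Printf("watching container %q\n", c)
	}
}
```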
Nov 15 01:59:50 af867b kubelet[27751]: W1115 01:59:50.831733 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:50 af867b kubelet[27751]: I1115 01:59:50.831896 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:50 af867b kubelet[27751]: E1115 01:59:50.831922 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.011664 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074753 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41082368Ki, capacity: 45Gi, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074808 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7042836Ki, capacity: 7393360Ki
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074817 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074826 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6621280Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074836 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419912Ki, capacity: 10198Mi, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074848 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384825, capacity: 10208Ki, time: 2017-11-15 01:59:36.700965635 +0000 UTC
Nov 15 01:59:51 af867b kubelet[27751]: I1115 01:59:51.074875 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 01:59:52 af867b kubelet[27751]: I1115 01:59:52.523554 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:52 af867b kubelet[27751]: I1115 01:59:52.960029 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 01:59:52 af867b kubelet[27751]: I1115 01:59:52.960085 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.461958 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:53 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420db2200 18 [] true false map[] 0xc420c90100 <nil>}
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.462087 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.617367 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.617423 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.626431 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 01:59:53 GMT]] 0xc42103a6e0 2 [] false false map[] 0xc420430a00 0xc420e78c60}
Nov 15 01:59:53 af867b kubelet[27751]: I1115 01:59:53.626526 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 01:59:54 af867b kubelet[27751]: I1115 01:59:54.523466 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:55 af867b kubelet[27751]: W1115 01:59:55.833311 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 01:59:55 af867b kubelet[27751]: I1115 01:59:55.833480 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:55 af867b kubelet[27751]: E1115 01:59:55.833512 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 01:59:56 af867b kubelet[27751]: I1115 01:59:56.523454 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.506158 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.506206 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.507135 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4206ab160 2 [] true false map[] 0xc420a30300 <nil>}
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.507172 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.681741 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.681787 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.683527 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 01:59:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420c98b80 2 [] true false map[] 0xc420a30700 <nil>}
Nov 15 01:59:57 af867b kubelet[27751]: I1115 01:59:57.683586 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 01:59:58 af867b kubelet[27751]: I1115 01:59:58.523483 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:00 af867b kubelet[27751]: I1115 02:00:00.523681 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:00 af867b kubelet[27751]: W1115 02:00:00.834797 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:00 af867b kubelet[27751]: I1115 02:00:00.835770 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:00 af867b kubelet[27751]: E1115 02:00:00.835806 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.075087 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120313 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6617608Ki, capacity: 7393360Ki, time: 2017-11-15 01:59:52.517760792 +0000 UTC
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120356 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7419916Ki, capacity: 10198Mi, time: 2017-11-15 01:59:52.517760792 +0000 UTC
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120367 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384831, capacity: 10208Ki, time: 2017-11-15 01:59:52.517760792 +0000 UTC
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120375 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41082368Ki, capacity: 45Gi, time: 2017-11-15 01:59:52.517760792 +0000 UTC
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120385 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7041148Ki, capacity: 7393360Ki
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120392 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:01 af867b kubelet[27751]: I1115 02:00:01.120414 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.523566 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.523659 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.526759 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.529722 27751 status_manager.go:325] Ignoring same status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3e107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356}] QOSClass:Burstable}
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.530006 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.546808 27751 config.go:282] Setting pods for source api
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.547226 27751 config.go:404] Receiving a new pod "kube-controller-manager-af867b_kube-system(b15cbd7d-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.547285 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-controller-manager-af867b_kube-system(b15cbd7d-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.546811 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.847616 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.847846 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.960006 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:02 af867b kubelet[27751]: I1115 02:00:02.960060 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.461643 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:03 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421786220 18 [] true false map[] 0xc420a30900 <nil>}
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.461717 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.523459 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.523582 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.523810 27751 status_manager.go:325] Ignoring same status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a31626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable}
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.524003 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.530005 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.530173 27751 config.go:282] Setting pods for source api
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.530617 27751 config.go:404] Receiving a new pod "kube-scheduler-af867b_kube-system(b1f2f82f-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.530672 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-scheduler-af867b_kube-system(b1f2f82f-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.617380 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.617432 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.624624 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:03 GMT]] 0xc4216f0fc0 2 [] false false map[] 0xc420d4a600 0xc42127ca50}
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.624663 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.658331 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.658511 27751 status_manager.go:325] Ignoring same status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3e107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356}] QOSClass:Burstable}
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.658695 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.830256 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.830448 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.958994 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:03 af867b kubelet[27751]: I1115 02:00:03.959207 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.523465 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.661690 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.661883 27751 status_manager.go:325] Ignoring same status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a31626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable}
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.662048 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.962285 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:04 af867b kubelet[27751]: I1115 02:00:04.962460 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:00:05 af867b kubelet[27751]: W1115 02:00:05.836982 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:05 af867b kubelet[27751]: I1115 02:00:05.837604 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:05 af867b kubelet[27751]: E1115 02:00:05.837631 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:06 af867b kubelet[27751]: I1115 02:00:06.523449 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.506120 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.506162 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.507285 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:00:07 GMT] Content-Length:[2]] 0xc421171d20 2 [] true false map[] 0xc420431400 <nil>}
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.507329 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.681634 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.681656 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.682836 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:07 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4212e0e60 2 [] true false map[] 0xc420431800 <nil>}
Nov 15 02:00:07 af867b kubelet[27751]: I1115 02:00:07.682884 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:08 af867b kubelet[27751]: I1115 02:00:08.523445 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.523462 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.530782 27751 config.go:282] Setting pods for source api
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.531554 27751 status_manager.go:451] Status for pod "kube-controller-manager-af867b_kube-system(b15cbd7d-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3e107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356}] QOSClass:Burstable})
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.539640 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-controller-manager-af867b_kube-system(b15cbd7d-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.543515 27751 status_manager.go:451] Status for pod "kube-scheduler-af867b_kube-system(b1f2f82f-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a31626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable})
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.543690 27751 config.go:282] Setting pods for source api
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.544287 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-scheduler-af867b_kube-system(b1f2f82f-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:10 af867b kubelet[27751]: W1115 02:00:10.838909 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:10 af867b kubelet[27751]: I1115 02:00:10.839087 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:10 af867b kubelet[27751]: E1115 02:00:10.839109 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.120578 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163882 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6617280Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163921 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7420096Ki, capacity: 10198Mi, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163931 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384831, capacity: 10208Ki, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163939 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40107Mi, capacity: 45Gi, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163947 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7040360Ki, capacity: 7393360Ki
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163954 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.163972 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.523448 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.523563 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.524141 27751 status_manager.go:325] Ignoring same status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort}
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.524857 27751 kubelet.go:1610] Creating a mirror pod for static pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.536029 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.536189 27751 config.go:282] Setting pods for source api
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.536879 27751 config.go:404] Receiving a new pod "etcd-af867b_kube-system(b6b89970-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.536932 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "etcd-af867b_kube-system(b6b89970-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.836282 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:11 af867b kubelet[27751]: I1115 02:00:11.836452 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.523432 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.693892 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.694076 27751 status_manager.go:325] Ignoring same status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort}
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.694273 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.809140 27751 config.go:282] Setting pods for source api
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.809734 27751 config.go:404] Receiving a new pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.810111 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.810267 27751 kubelet_pods.go:1284] Generating status for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.810576 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814331 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814356 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242: /kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814364 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814372 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814584 27751 manager.go:932] Added container: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814698 27751 handler.go:325] Added event &{/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242 2017-11-15 02:00:12.813194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.814750 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.817255 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.829638 27751 status_manager.go:451] Status for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:00:12 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:00:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [weave weave-npc]}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 02:00:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:weave State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:weaveworks/weave-kube:2.0.5 ImageID: ContainerID:} {Name:weave-npc State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:weaveworks/weave-npc:2.0.5 ImageID: ContainerID:}] QOSClass:Burstable})
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.830235 27751 config.go:282] Setting pods for source api
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.831180 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920554 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-conf") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920610 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin2") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920642 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920670 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920695 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-dbus") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920741 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-weavedb") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920833 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-lib-modules") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.920900 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "weave-net-token-rn6j7" (UniqueName: "kubernetes.io/secret/b77b0858-c9a8-11e7-89f4-c6b053eac242-weave-net-token-rn6j7") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.959969 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.960014 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.994554 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:12 af867b kubelet[27751]: I1115 02:00:12.994698 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021193 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "dbus" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-dbus") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021253 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021289 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "weavedb" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-weavedb") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021324 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-lib-modules") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021366 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "weave-net-token-rn6j7" (UniqueName: "kubernetes.io/secret/b77b0858-c9a8-11e7-89f4-c6b053eac242-weave-net-token-rn6j7") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021393 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin2") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021428 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-conf") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021459 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.021546 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-xtables-lock") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022012 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "dbus" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-dbus") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022114 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "cni-bin" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022158 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "weavedb" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-weavedb") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022197 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-lib-modules") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022231 27751 secret.go:186] Setting up volume weave-net-token-rn6j7 for pod b77b0858-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022433 27751 empty_dir.go:264] pod b77b0858-c9a8-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_weave-net-token-rn6j7
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.022452 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7 --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7])
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024265 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "cni-conf" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-conf") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024399 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "cni-bin2" (UniqueName: "kubernetes.io/host-path/b77b0858-c9a8-11e7-89f4-c6b053eac242-cni-bin2") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024437 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "xtables-lock"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024467 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "dbus"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024495 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "cni-bin"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024505 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "weavedb"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024523 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "lib-modules"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024532 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "cni-conf"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.024556 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "cni-bin2"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033146 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-29268.scope"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033169 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-29268.scope: /system.slice/run-29268.scope not handled by systemd handler
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033175 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-29268.scope"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033182 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-29268.scope"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033358 27751 manager.go:932] Added container: "/system.slice/run-29268.scope" (aliases: [], namespace: "")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033476 27751 handler.go:325] Added event &{/system.slice/run-29268.scope 2017-11-15 02:00:13.031194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033506 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033518 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount", but ignoring.
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033528 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033537 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033545 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount", but ignoring.
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033554 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.033577 27751 container.go:409] Start housekeeping for container "/system.slice/run-29268.scope"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.039932 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.039963 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount", but ignoring.
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.039981 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.039995 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.040005 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount", but ignoring.
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.040015 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.042819 27751 secret.go:217] Received secret kube-system/weave-net-token-rn6j7 containing (3) pieces of data, 1900 total bytes
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.042893 27751 atomic_writer.go:145] pod kube-system/weave-net-rg7fn volume weave-net-token-rn6j7: write required for target directory /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.042968 27751 atomic_writer.go:160] pod kube-system/weave-net-rg7fn volume weave-net-token-rn6j7: performed write of new data to ts data directory: /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7/..119811_15_11_02_00_13.100287739
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.043046 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "weave-net-token-rn6j7" (UniqueName: "kubernetes.io/secret/b77b0858-c9a8-11e7-89f4-c6b053eac242-weave-net-token-rn6j7") pod "weave-net-rg7fn" (UID: "b77b0858-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.043075 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "weave-net-token-rn6j7"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.044941 27751 manager.go:989] Destroyed container: "/system.slice/run-29268.scope" (aliases: [], namespace: "")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.044961 27751 handler.go:325] Added event &{/system.slice/run-29268.scope 2017-11-15 02:00:13.044956891 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117571 27751 volume_manager.go:366] All volumes are attached and mounted for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117606 27751 kuberuntime_manager.go:370] No sandbox for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117619 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0 1] ContainersToKill:map[]} for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117688 27751 kuberuntime_manager.go:565] SyncPod received new pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117699 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.117720 27751 kuberuntime_manager.go:626] Creating sandbox for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.119592 27751 expiration_cache.go:98] Entry version: {key:version obj:0xc4206acc80} has expired
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.120191 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.120217 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.211220 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:13 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420f0dbc0 18 [] true false map[] 0xc420afd200 <nil>}
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.211285 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.425860 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.427245 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f/resolv.conf with:
Nov 15 02:00:13 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.427250 27751 manager.go:932] Added container: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f" (aliases: [k8s_POD_weave-net-rg7fn_kube-system_b77b0858-c9a8-11e7-89f4-c6b053eac242_0 be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f], namespace: "docker")
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.427395 27751 handler.go:325] Added event &{/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f 2017-11-15 02:00:13.367194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.427435 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.427597 27751 kuberuntime_manager.go:640] Created PodSandbox "be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f" for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.429920 27751 kuberuntime_manager.go:705] Creating container &Container{Name:weave,Image:weaveworks/weave-kube:2.0.5,Command:[/home/weave/launch.sh],Args:[],WorkingDir:,Ports:[],Env:[{HOSTNAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},},},VolumeMounts:[{weavedb false /weavedb <nil>} {cni-bin false /host/opt <nil>} {cni-bin2 false /host/home <nil>} {cni-conf false /host/etc <nil>} {dbus false /host/var/lib/dbus <nil>} {lib-modules false /lib/modules <nil>} {xtables-lock false /run/xtables.lock <nil>} {weave-net-token-rn6j7 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431028 27751 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431060 27751 config.go:131] looking for config.json at /var/lib/kubelet/config.json
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431096 27751 config.go:131] looking for config.json at /config.json
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431108 27751 config.go:131] looking for config.json at /.docker/config.json
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431117 27751 config.go:131] looking for config.json at /.docker/config.json
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431129 27751 config.go:101] looking for .dockercfg at /var/lib/kubelet/.dockercfg
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431140 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431150 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431160 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431170 27751 provider.go:89] Unable to parse Docker config file: couldn't find valid .dockercfg after checking in [/var/lib/kubelet /]
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431183 27751 kuberuntime_image.go:46] Pulling image "weaveworks/weave-kube:2.0.5" without credentials
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.431256 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave}"}): type: 'Normal' reason: 'Pulling' pulling image "weaveworks/weave-kube:2.0.5"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.617344 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.617380 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.623731 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:13 GMT]] 0xc42138a9a0 2 [] false false map[] 0xc420c91300 0xc420b44d10}
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.623775 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.698029 27751 generic.go:146] GenericPLEG: b77b0858-c9a8-11e7-89f4-c6b053eac242/be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f: non-existent -> running
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.699194 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"] for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.701376 27751 generic.go:345] PLEG: Write status for weave-net-rg7fn/kube-system: &container.PodStatus{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Name:"weave-net-rg7fn", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4211e51d0)}} (err: <nil>)
Nov 15 02:00:13 af867b kubelet[27751]: I1115 02:00:13.701447 27751 kubelet.go:1871] SyncLoop (PLEG): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"}
Nov 15 02:00:14 af867b kubelet[27751]: I1115 02:00:14.523469 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.362590 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.523554 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.523818 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.524269 27751 status_manager.go:325] Ignoring same status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:872e3d4286a8ef4338df59945cb0d64c2622268ceb3e8a2ce7b52243279b02d0 ContainerID:docker://8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043}] QOSClass:Burstable}
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.524607 27751 kubelet.go:1610] Creating a mirror pod for static pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.539785 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.539976 27751 config.go:282] Setting pods for source api
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.540524 27751 config.go:404] Receiving a new pod "kube-apiserver-af867b_kube-system(b91a02e8-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.541971 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-apiserver-af867b_kube-system(b91a02e8-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.840265 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.840451 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:15 af867b kubelet[27751]: W1115 02:00:15.841565 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:15 af867b kubelet[27751]: I1115 02:00:15.841822 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:15 af867b kubelet[27751]: E1115 02:00:15.841881 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:16 af867b kubelet[27751]: I1115 02:00:16.523482 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:16 af867b kubelet[27751]: I1115 02:00:16.713391 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:16 af867b kubelet[27751]: I1115 02:00:16.713624 27751 status_manager.go:325] Ignoring same status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:872e3d4286a8ef4338df59945cb0d64c2622268ceb3e8a2ce7b52243279b02d0 ContainerID:docker://8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043}] QOSClass:Burstable}
Nov 15 02:00:16 af867b kubelet[27751]: I1115 02:00:16.713817 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.014096 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.014297 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.506132 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.506179 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.507170 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4216c48c0 2 [] true false map[] 0xc4200dd600 <nil>}
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.507222 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.681694 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.681744 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.682678 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc42159a9e0 2 [] true false map[] 0xc4200dd800 <nil>}
Nov 15 02:00:17 af867b kubelet[27751]: I1115 02:00:17.682717 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:18 af867b kubelet[27751]: I1115 02:00:18.525670 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.523542 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.532140 27751 status_manager.go:451] Status for pod "etcd-af867b_kube-system(b6b89970-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort})
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.532466 27751 config.go:282] Setting pods for source api
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.540181 27751 config.go:282] Setting pods for source api
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.540703 27751 status_manager.go:451] Status for pod "kube-apiserver-af867b_kube-system(b91a02e8-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:872e3d4286a8ef4338df59945cb0d64c2622268ceb3e8a2ce7b52243279b02d0 ContainerID:docker://8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043}] QOSClass:Burstable})
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.550969 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "etcd-af867b_kube-system(b6b89970-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.551029 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-apiserver-af867b_kube-system(b91a02e8-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:20 af867b kubelet[27751]: W1115 02:00:20.843145 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:20 af867b kubelet[27751]: I1115 02:00:20.843297 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:20 af867b kubelet[27751]: E1115 02:00:20.843320 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.164195 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228893 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7040084Ki, capacity: 7393360Ki
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228930 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228939 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6617280Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228966 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7420096Ki, capacity: 10198Mi, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228976 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384831, capacity: 10208Ki, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.228986 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40107Mi, capacity: 45Gi, time: 2017-11-15 02:00:05.296675797 +0000 UTC
Nov 15 02:00:21 af867b kubelet[27751]: I1115 02:00:21.229009 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:00:22 af867b kubelet[27751]: I1115 02:00:22.523505 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:22 af867b kubelet[27751]: I1115 02:00:22.961802 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:22 af867b kubelet[27751]: I1115 02:00:22.961848 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.470109 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:23 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421438720 18 [] true false map[] 0xc420ee3100 <nil>}
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.470171 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.617358 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.617401 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.624664 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:23 GMT] Content-Type:[text/plain; charset=utf-8]] 0xc420f64580 2 [] false false map[] 0xc420d4be00 0xc42131e790}
Nov 15 02:00:23 af867b kubelet[27751]: I1115 02:00:23.624732 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:24 af867b kubelet[27751]: I1115 02:00:24.523808 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:24 af867b kubelet[27751]: I1115 02:00:24.790344 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-kube:2.0.5": "27c6c140ede3: Extracting [====================================> ] 5.407MB/7.44MB"
Nov 15 02:00:25 af867b kubelet[27751]: I1115 02:00:25.362712 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:00:25 af867b kubelet[27751]: W1115 02:00:25.844428 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:25 af867b kubelet[27751]: I1115 02:00:25.844571 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:25 af867b kubelet[27751]: E1115 02:00:25.844594 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:26 af867b kubelet[27751]: I1115 02:00:26.523483 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.506176 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.506227 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.510746 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421bf5060 2 [] true false map[] 0xc421179100 <nil>}
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.510818 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.681816 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.681858 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.682690 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:00:27 GMT]] 0xc421bf52c0 2 [] true false map[] 0xc420a31f00 <nil>}
Nov 15 02:00:27 af867b kubelet[27751]: I1115 02:00:27.682761 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:28 af867b kubelet[27751]: I1115 02:00:28.523555 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:30 af867b kubelet[27751]: I1115 02:00:30.523545 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:30 af867b kubelet[27751]: W1115 02:00:30.845392 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:30 af867b kubelet[27751]: I1115 02:00:30.845496 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:30 af867b kubelet[27751]: E1115 02:00:30.845520 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.229267 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291198 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6606584Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:22.740279886 +0000 UTC
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291241 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7399360Ki, capacity: 10198Mi, time: 2017-11-15 02:00:22.740279886 +0000 UTC
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291253 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384791, capacity: 10208Ki, time: 2017-11-15 02:00:22.740279886 +0000 UTC
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291263 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 41037312Ki, capacity: 45Gi, time: 2017-11-15 02:00:22.740279886 +0000 UTC
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291272 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7038388Ki, capacity: 7393360Ki
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291279 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:31 af867b kubelet[27751]: I1115 02:00:31.291298 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:00:32 af867b kubelet[27751]: I1115 02:00:32.523518 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:32 af867b kubelet[27751]: I1115 02:00:32.959993 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:32 af867b kubelet[27751]: I1115 02:00:32.960040 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.461409 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:33 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420d82480 18 [] true false map[] 0xc4200dd700 <nil>}
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.461483 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.617291 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.617318 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.623565 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:33 GMT]] 0xc4209c55c0 2 [] false false map[] 0xc420d4ac00 0xc421867c30}
Nov 15 02:00:33 af867b kubelet[27751]: I1115 02:00:33.623600 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:34 af867b kubelet[27751]: I1115 02:00:34.523633 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:34 af867b kubelet[27751]: I1115 02:00:34.790393 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-kube:2.0.5": "35577841e8d1: Downloading [===========> ] 2.391MB/10.09MB"
Nov 15 02:00:35 af867b kubelet[27751]: I1115 02:00:35.362720 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:00:35 af867b kubelet[27751]: W1115 02:00:35.846344 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:35 af867b kubelet[27751]: I1115 02:00:35.846496 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:35 af867b kubelet[27751]: E1115 02:00:35.846518 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:36 af867b kubelet[27751]: I1115 02:00:36.523462 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.506115 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.506140 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.506840 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421b39c40 2 [] true false map[] 0xc420d4b100 <nil>}
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.506874 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.681639 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.681667 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.682324 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:00:37 GMT] Content-Length:[2]] 0xc421c17160 2 [] true false map[] 0xc4200dd500 <nil>}
Nov 15 02:00:37 af867b kubelet[27751]: I1115 02:00:37.682360 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:38 af867b kubelet[27751]: I1115 02:00:38.523449 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:40 af867b kubelet[27751]: I1115 02:00:40.523834 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:40 af867b kubelet[27751]: W1115 02:00:40.847393 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:40 af867b kubelet[27751]: I1115 02:00:40.847526 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:40 af867b kubelet[27751]: E1115 02:00:40.847559 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.291487 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336486 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6596220Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:34.61443142 +0000 UTC
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336524 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7415348Ki, capacity: 10198Mi, time: 2017-11-15 02:00:34.61443142 +0000 UTC
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336535 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384744, capacity: 10208Ki, time: 2017-11-15 02:00:34.61443142 +0000 UTC
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336543 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39981Mi, capacity: 45Gi, time: 2017-11-15 02:00:34.61443142 +0000 UTC
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336552 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7038116Ki, capacity: 7393360Ki
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336559 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:41 af867b kubelet[27751]: I1115 02:00:41.336577 27751 eviction_manager.go:325] eviction manager: no resources are starved
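
Each eviction-manager pass above records the observed signals (memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, plus the allocatable variants) and compares them against configured thresholds; nothing is below threshold here, hence "no resources are starved". A simplified sketch of that comparison, with threshold values that are purely illustrative (the node's real eviction thresholds are not visible in this log):

    package main

    import "fmt"

    // observation mirrors one "signal=..., available: ..., capacity: ..." log line.
    type observation struct {
        signal    string
        available int64 // bytes (or inode count for inodesFree)
        threshold int64 // evict when available falls below this
    }

    func starvedSignals(obs []observation) []string {
        var starved []string
        for _, o := range obs {
            if o.available < o.threshold {
                starved = append(starved, o.signal)
            }
        }
        return starved
    }

    func main() {
        obs := []observation{
            // Available values loosely taken from the observations above;
            // thresholds (100Mi, 1Gi, 50k inodes) are invented for illustration.
            {"memory.available", 6596220 * 1024, 100 * 1024 * 1024},
            {"nodefs.available", 7415348 * 1024, 1 * 1024 * 1024 * 1024},
            {"nodefs.inodesFree", 10384744, 50000},
        }
        if s := starvedSignals(obs); len(s) == 0 {
            fmt.Println("eviction manager: no resources are starved")
        } else {
            fmt.Println("eviction manager: resources starved:", s)
        }
    }
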
Nov 15 02:00:42 af867b kubelet[27751]: I1115 02:00:42.523536 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:42 af867b kubelet[27751]: I1115 02:00:42.959963 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:42 af867b kubelet[27751]: I1115 02:00:42.959993 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.464519 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:43 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421329ee0 18 [] true false map[] 0xc420a31900 <nil>}
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.464644 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.617323 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.617362 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.624374 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:43 GMT]] 0xc420db2160 2 [] false false map[] 0xc420a31b00 0xc4212f5ce0}
Nov 15 02:00:43 af867b kubelet[27751]: I1115 02:00:43.624418 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:44 af867b kubelet[27751]: I1115 02:00:44.523474 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:44 af867b kubelet[27751]: I1115 02:00:44.790319 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-kube:2.0.5": "35577841e8d1: Downloading [===================> ] 3.997MB/10.09MB"
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.362891 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.527527 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.527626 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.527824 27751 status_manager.go:325] Ignoring same status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:59:20 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:63210bc9690144d41126a646caf03a3d76ddc6d06b8bad119d468193c3e90c24 ContainerID:docker://7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34}] QOSClass:BestEffort}
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.528000 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.540039 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.540392 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.542843 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.542983 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.543073 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.543151 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.828275 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.828410 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
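
The configmap/secret and atomic_writer lines in the sync above show the kubelet refreshing the kube-proxy pod's projected volumes and deciding no update is needed. A common way to publish such payloads atomically is to write the new files into a fresh timestamped directory and then swap a symlink, so readers never observe a half-written payload; the sketch below is a hypothetical illustration of that pattern (directory layout, link names, and the publish function are invented, not read from this log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // publish writes payload files into a new timestamped directory under
    // targetDir and atomically repoints a "..data" symlink at it, so consumers
    // reading through the symlink always see a complete payload.
    func publish(targetDir string, payload map[string][]byte) error {
        tsDir := filepath.Join(targetDir, time.Now().UTC().Format("..2006_01_02_15_04_05.000000000"))
        if err := os.MkdirAll(tsDir, 0o755); err != nil {
            return err
        }
        for name, data := range payload {
            if err := os.WriteFile(filepath.Join(tsDir, name), data, 0o644); err != nil {
                return err
            }
        }
        // Create the new symlink under a temporary name, then rename it over
        // "..data"; rename(2) is atomic, so the switch is all-or-nothing.
        tmpLink := filepath.Join(targetDir, "..data_tmp")
        if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
            return err
        }
        return os.Rename(tmpLink, filepath.Join(targetDir, "..data"))
    }

    func main() {
        dir, err := os.MkdirTemp("", "projected-volume")
        if err != nil {
            fmt.Println("mkdir:", err)
            return
        }
        err = publish(dir, map[string][]byte{"config.conf": []byte("mode: iptables\n")})
        fmt.Println("published to", dir, "err:", err)
    }
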
Nov 15 02:00:45 af867b kubelet[27751]: W1115 02:00:45.849136 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:45 af867b kubelet[27751]: I1115 02:00:45.849285 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:45 af867b kubelet[27751]: E1115 02:00:45.849303 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:46 af867b kubelet[27751]: I1115 02:00:46.523460 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.506130 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.506165 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.507081 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:00:47 GMT]] 0xc421a53d80 2 [] true false map[] 0xc42110ae00 <nil>}
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.507119 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.684180 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.684225 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.685836 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:47 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4219eda60 2 [] true false map[] 0xc420afcd00 <nil>}
Nov 15 02:00:47 af867b kubelet[27751]: I1115 02:00:47.685893 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:48 af867b kubelet[27751]: I1115 02:00:48.523431 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.523449 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.528897 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.549933 27751 kubelet.go:1222] Container garbage collection succeeded
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825834 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825880 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825894 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825903 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825911 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825921 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825928 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825938 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825948 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825957 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825964 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825972 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825979 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825988 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.825998 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826006 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826026 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826040 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826051 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826058 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826067 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826073 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826082 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826092 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826100 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826109 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826120 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826128 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826137 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826147 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826155 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826361 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826384 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826394 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826402 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826411 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826417 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826426 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826437 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826445 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826452 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826461 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826468 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826476 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826484 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826490 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826499 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826510 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826518 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826528 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826539 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826547 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826555 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826563 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826570 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826577 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826586 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826594 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826603 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826614 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826622 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826632 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826673 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826683 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826692 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826702 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826727 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826738 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826748 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826757 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826764 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826772 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826779 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826788 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826798 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826807 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826814 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826823 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826830 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826839 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount", but ignoring.
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.826849 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
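
The long run of factory.go/manager.go lines above is cAdvisor walking each cgroup it discovers and asking every registered factory, in order, whether it can and wants to handle it: the docker factory declines the mount units, the systemd factory recognizes them but deliberately ignores them, and the cgroup ends up unwatched. A small sketch of that dispatch shape, with an invented interface and toy matching rules (the real factories decide differently):

    package main

    import (
        "fmt"
        "strings"
    )

    // factory mirrors the "can this factory handle the container, and does it
    // want to?" contract suggested by the factory.go log lines.
    type factory interface {
        Name() string
        CanHandleAndAccept(cgroupPath string) (handle, accept bool)
    }

    type dockerFactory struct{}

    func (dockerFactory) Name() string { return "docker" }
    func (dockerFactory) CanHandleAndAccept(p string) (bool, bool) {
        ok := strings.Contains(p, "/kubepods/") // toy rule: only container cgroups
        return ok, ok
    }

    type systemdFactory struct{}

    func (systemdFactory) Name() string { return "systemd" }
    func (systemdFactory) CanHandleAndAccept(p string) (bool, bool) {
        // Recognizes systemd units such as *.mount, but ignores them.
        return strings.HasSuffix(p, ".mount"), false
    }

    func dispatch(path string, factories []factory) {
        for _, f := range factories {
            handle, accept := f.CanHandleAndAccept(path)
            switch {
            case !handle:
                fmt.Printf("Factory %q was unable to handle container %q\n", f.Name(), path)
            case !accept:
                fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.Name(), path)
                fmt.Printf("ignoring container %q\n", path)
                return
            default:
                fmt.Printf("Using factory %q for container %q\n", f.Name(), path)
                return
            }
        }
        fmt.Printf("ignoring container %q\n", path)
    }

    func main() {
        factories := []factory{dockerFactory{}, systemdFactory{}}
        dispatch("/system.slice/boot.mount", factories)
        dispatch("/kubepods/burstable/podb77b0858/f4fa9d4e3be5", factories)
    }
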
Nov 15 02:00:50 af867b kubelet[27751]: W1115 02:00:50.850448 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:50 af867b kubelet[27751]: I1115 02:00:50.850612 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:50 af867b kubelet[27751]: E1115 02:00:50.850635 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.336801 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402631 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6593816Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402672 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7411340Ki, capacity: 10198Mi, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402693 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384744, capacity: 10208Ki, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402702 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39981Mi, capacity: 45Gi, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402774 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7037856Ki, capacity: 7393360Ki
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402783 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:00:51 af867b kubelet[27751]: I1115 02:00:51.402805 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:00:52 af867b kubelet[27751]: I1115 02:00:52.523542 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:52 af867b kubelet[27751]: I1115 02:00:52.959978 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:00:52 af867b kubelet[27751]: I1115 02:00:52.960009 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.461856 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:53 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc4209c4ee0 18 [] true false map[] 0xc4200dd500 <nil>}
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.461970 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.617356 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.617390 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.624871 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:00:53 GMT]] 0xc420aa7380 2 [] false false map[] 0xc42110b400 0xc420cc6f20}
Nov 15 02:00:53 af867b kubelet[27751]: I1115 02:00:53.624917 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:00:54 af867b kubelet[27751]: I1115 02:00:54.523438 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:54 af867b kubelet[27751]: I1115 02:00:54.790402 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-kube:2.0.5": "35577841e8d1: Downloading [====================================> ] 7.437MB/10.09MB"
Nov 15 02:00:55 af867b kubelet[27751]: I1115 02:00:55.362734 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:00:55 af867b kubelet[27751]: W1115 02:00:55.851696 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:00:55 af867b kubelet[27751]: I1115 02:00:55.851903 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:55 af867b kubelet[27751]: E1115 02:00:55.851927 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:00:56 af867b kubelet[27751]: I1115 02:00:56.523491 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.506133 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.506169 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.507077 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4219866a0 2 [] true false map[] 0xc421178600 <nil>}
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.507121 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.681659 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.681697 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.683085 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:00:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421986900 2 [] true false map[] 0xc421178800 <nil>}
Nov 15 02:00:57 af867b kubelet[27751]: I1115 02:00:57.683142 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:00:58 af867b kubelet[27751]: I1115 02:00:58.523549 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.524535 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.854771 27751 kube_docker_client.go:333] Stop pulling image "weaveworks/weave-kube:2.0.5": "Status: Downloaded newer image for weaveworks/weave-kube:2.0.5"
Nov 15 02:01:00 af867b kubelet[27751]: W1115 02:01:00.857891 27751 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.858016 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:01:00 af867b kubelet[27751]: E1115 02:01:00.858044 27751 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.858225 27751 kuberuntime_container.go:100] Generating ref for container weave: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave}"}
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.858263 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.858326 27751 kubelet_pods.go:123] container: kube-system/weave-net-rg7fn/weave podIP: "10.196.65.210" creating hosts mount: true
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.861488 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "weaveworks/weave-kube:2.0.5"
Nov 15 02:01:00 af867b kubelet[27751]: I1115 02:01:00.864452 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.147880 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.341481 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40"
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.344400 27751 kuberuntime_manager.go:705] Creating container &Container{Name:weave-npc,Image:weaveworks/weave-npc:2.0.5,Command:[],Args:[],WorkingDir:,Ports:[],Env:[{HOSTNAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},},},VolumeMounts:[{xtables-lock false /run/xtables.lock <nil>} {weave-net-token-rn6j7 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.344601 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.348553 27751 kuberuntime_image.go:46] Pulling image "weaveworks/weave-npc:2.0.5" without credentials
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.348644 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave-npc}"}): type: 'Normal' reason: 'Pulling' pulling image "weaveworks/weave-npc:2.0.5"
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.350246 27751 manager.go:932] Added container: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40" (aliases: [k8s_weave_weave-net-rg7fn_kube-system_b77b0858-c9a8-11e7-89f4-c6b053eac242_0 f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40], namespace: "docker")
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.350399 27751 handler.go:325] Added event &{/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40 2017-11-15 02:01:01.256194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.350437 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40"
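
docker_service.go above places the new weave container under the cgroup parent /kubepods/burstable/pod<UID>: the pod's QoS class (Burstable here, since the containers set CPU requests without limits) picks the branch of the /kubepods hierarchy. A tiny sketch of that mapping, with an invented helper name:

    package main

    import "fmt"

    // cgroupParentFor returns the cgroup parent a pod's containers are placed
    // under, following the /kubepods/<qos>/pod<UID> layout seen in the log.
    // Guaranteed pods sit directly under /kubepods in that layout.
    func cgroupParentFor(qosClass, podUID string) string {
        switch qosClass {
        case "Guaranteed":
            return "/kubepods/pod" + podUID
        case "Burstable":
            return "/kubepods/burstable/pod" + podUID
        default: // BestEffort
            return "/kubepods/besteffort/pod" + podUID
        }
    }

    func main() {
        fmt.Println(cgroupParentFor("Burstable", "b77b0858-c9a8-11e7-89f4-c6b053eac242"))
        // -> /kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242
    }
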
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.403018 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451478 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384744, capacity: 10208Ki, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451515 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39981Mi, capacity: 45Gi, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451525 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7035680Ki, capacity: 7393360Ki
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451532 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451539 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6593816Ki, capacity: 7393360Ki, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451546 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7411340Ki, capacity: 10198Mi, time: 2017-11-15 02:00:47.870483701 +0000 UTC
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.451567 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.923699 27751 generic.go:146] GenericPLEG: b77b0858-c9a8-11e7-89f4-c6b053eac242/f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40: non-existent -> running
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.924574 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"] for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.929035 27751 generic.go:345] PLEG: Write status for weave-net-rg7fn/kube-system: &container.PodStatus{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Name:"weave-net-rg7fn", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42066eee0)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42126f1d0)}} (err: <nil>)
Nov 15 02:01:01 af867b kubelet[27751]: I1115 02:01:01.929097 27751 kubelet.go:1871] SyncLoop (PLEG): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40"}
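
The GenericPLEG lines above are the relist step noticing that container f4fa9d4e... went from non-existent to running and turning that delta into a ContainerStarted event for the sync loop. A compact sketch of that compare-and-emit step, with simplified state and event types (not the kubelet's actual ones):

    package main

    import "fmt"

    type event struct {
        podID string
        typ   string // e.g. ContainerStarted, ContainerDied
        data  string // container ID
    }

    // relist diffs the previously observed container states against the current
    // ones and emits one lifecycle event per changed container, roughly what a
    // generic PLEG does on every poll.
    func relist(old, cur map[string]string, podID string) []event {
        var events []event
        for id, state := range cur {
            if old[id] != state && state == "running" {
                events = append(events, event{podID, "ContainerStarted", id})
            }
        }
        for id, state := range old {
            if state == "running" && cur[id] != "running" {
                events = append(events, event{podID, "ContainerDied", id})
            }
        }
        return events
    }

    func main() {
        old := map[string]string{}                          // container previously non-existent
        cur := map[string]string{"f4fa9d4e3be5": "running"} // now reported running
        for _, e := range relist(old, cur, "b77b0858-c9a8-11e7-89f4-c6b053eac242") {
            fmt.Printf("SyncLoop (PLEG): pod %s, event %s, data %s\n", e.podID, e.typ, e.data)
        }
    }
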
Nov 15 02:01:02 af867b kubelet[27751]: I1115 02:01:02.523539 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:02 af867b kubelet[27751]: I1115 02:01:02.959992 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:02 af867b kubelet[27751]: I1115 02:01:02.960032 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.211105 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:03 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420bb0400 18 [] true false map[] 0xc420a31900 <nil>}
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.211175 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.617417 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.617482 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.632130 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:03 GMT]] 0xc420bd0100 2 [] false false map[] 0xc420a31b00 0xc421055ad0}
Nov 15 02:01:03 af867b kubelet[27751]: I1115 02:01:03.632179 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:04 af867b kubelet[27751]: I1115 02:01:04.523448 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:05 af867b kubelet[27751]: I1115 02:01:05.362811 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:01:05 af867b kubelet[27751]: I1115 02:01:05.859700 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:06 af867b kubelet[27751]: I1115 02:01:06.523580 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.467368 27751 kubelet_node_status.go:443] Recording NodeReady event message for node af867b
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.471348 27751 server.go:227] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"af867b", UID:"af867b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node af867b status is now: NodeReady
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.506146 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.506164 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.507102 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:07 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421db46a0 2 [] true false map[] 0xc421179a00 <nil>}
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.507141 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.681617 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.681652 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.684314 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:07 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421cff660 2 [] true false map[] 0xc421179c00 <nil>}
Nov 15 02:01:07 af867b kubelet[27751]: I1115 02:01:07.684356 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:08 af867b kubelet[27751]: I1115 02:01:08.523515 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:10 af867b kubelet[27751]: I1115 02:01:10.523531 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:10 af867b kubelet[27751]: I1115 02:01:10.861176 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.451816 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508453 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6533196Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:06.674507567 +0000 UTC
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508502 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7405248Ki, capacity: 10198Mi, time: 2017-11-15 02:01:06.674507567 +0000 UTC
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508520 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384695, capacity: 10208Ki, time: 2017-11-15 02:01:06.674507567 +0000 UTC
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508531 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40868352Ki, capacity: 45Gi, time: 2017-11-15 02:01:06.674507567 +0000 UTC
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508540 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 7035136Ki, capacity: 7393360Ki
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508547 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:11 af867b kubelet[27751]: I1115 02:01:11.508570 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:01:12 af867b kubelet[27751]: I1115 02:01:12.523551 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:12 af867b kubelet[27751]: I1115 02:01:12.709475 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-npc:2.0.5": "a2592a033c5d: Downloading [=======================> ] 5.291MB/11.27MB"
Nov 15 02:01:12 af867b kubelet[27751]: I1115 02:01:12.960044 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:12 af867b kubelet[27751]: I1115 02:01:12.960101 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.213292 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:13 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc4212e0b60 18 [] true false map[] 0xc421178f00 <nil>}
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.213390 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.617400 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.617462 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.631317 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:13 GMT]] 0xc420f64940 2 [] false false map[] 0xc420ee2300 0xc4213f0b00}
Nov 15 02:01:13 af867b kubelet[27751]: I1115 02:01:13.631374 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:14 af867b kubelet[27751]: I1115 02:01:14.523544 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:15 af867b kubelet[27751]: I1115 02:01:15.362759 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:01:15 af867b kubelet[27751]: I1115 02:01:15.862921 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.524862 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.524927 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.528595 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.528816 27751 status_manager.go:325] Ignoring same status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3e107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://272af7e3b2b1a203250d154349fdf77f296d7b7f65ce2c77b6b3a94e53dba356}] QOSClass:Burstable}
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.528971 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.829285 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:01:16 af867b kubelet[27751]: I1115 02:01:16.829499 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.506098 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.506126 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.507335 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421deeee0 2 [] true false map[] 0xc420afd400 <nil>}
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.507377 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.681635 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.681667 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.682906 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421c7b960 2 [] true false map[] 0xc420afd600 <nil>}
Nov 15 02:01:17 af867b kubelet[27751]: I1115 02:01:17.682960 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:18 af867b kubelet[27751]: I1115 02:01:18.523481 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:20 af867b kubelet[27751]: I1115 02:01:20.523458 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:20 af867b kubelet[27751]: I1115 02:01:20.866415 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.508856 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554574 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7401080Ki, capacity: 10198Mi, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554620 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384695, capacity: 10208Ki, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554632 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40868352Ki, capacity: 45Gi, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554643 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6976916Ki, capacity: 7393360Ki
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554651 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554658 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6532960Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.554680 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.609382 27751 config.go:282] Setting pods for source api
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.610592 27751 config.go:404] Receiving a new pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.610905 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.611102 27751 kubelet_pods.go:1284] Generating status for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.611478 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.613917 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.613940 27751 factory.go:105] Error trying to work out if we can handle /kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242: /kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.613948 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.613956 27751 factory.go:112] Using factory "raw" for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.615190 27751 manager.go:932] Added container: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.615318 27751 handler.go:325] Added event &{/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242 2017-11-15 02:01:21.613194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.615355 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.618187 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.624415 27751 status_manager.go:451] Status for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubedns dnsmasq sidecar]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:01:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:dnsmasq State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 ImageID: ContainerID:} {Name:kubedns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 ImageID: ContainerID:} {Name:sidecar State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5 ImageID: ContainerID:}] QOSClass:Burstable})
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.624667 27751 config.go:282] Setting pods for source api
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.627670 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.786392 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-config") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.786445 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-987zv" (UniqueName: "kubernetes.io/secret/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-token-987zv") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.886781 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-config") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.886852 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "kube-dns-token-987zv" (UniqueName: "kubernetes.io/secret/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-token-987zv") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.886906 27751 secret.go:186] Setting up volume kube-dns-token-987zv for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.887109 27751 empty_dir.go:264] pod 97270c63-c9a8-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_kube-dns-token-987zv
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.887129 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv])
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.889747 27751 configmap.go:187] Setting up volume kube-dns-config for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895372 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-30165.scope"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895389 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-30165.scope: /system.slice/run-30165.scope not handled by systemd handler
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895396 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-30165.scope"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895403 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-30165.scope"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895560 27751 manager.go:932] Added container: "/system.slice/run-30165.scope" (aliases: [], namespace: "")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895667 27751 handler.go:325] Added event &{/system.slice/run-30165.scope 2017-11-15 02:01:21.893194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895700 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895726 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount", but ignoring.
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895737 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.895759 27751 container.go:409] Start housekeeping for container "/system.slice/run-30165.scope"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.901861 27751 configmap.go:218] Received configMap kube-system/kube-dns containing (0) pieces of data, 0 total bytes
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.901928 27751 atomic_writer.go:142] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-config: no update required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.901965 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-config") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902243 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kube-dns-config"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902449 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902463 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount", but ignoring.
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902480 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902504 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902516 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount", but ignoring.
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902531 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902543 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902551 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount", but ignoring.
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.902560 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.903278 27751 manager.go:989] Destroyed container: "/system.slice/run-30165.scope" (aliases: [], namespace: "")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.903294 27751 handler.go:325] Added event &{/system.slice/run-30165.scope 2017-11-15 02:01:21.903289511 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.906816 27751 secret.go:217] Received secret kube-system/kube-dns-token-987zv containing (3) pieces of data, 1896 total bytes
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.906873 27751 atomic_writer.go:145] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-token-987zv: write required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.906956 27751 atomic_writer.go:160] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-token-987zv: performed write of new data to ts data directory: /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv/..119811_15_11_02_01_21.803270174
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.907035 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "kube-dns-token-987zv" (UniqueName: "kubernetes.io/secret/97270c63-c9a8-11e7-89f4-c6b053eac242-kube-dns-token-987zv") pod "kube-dns-545bc4bfd4-zvfqd" (UID: "97270c63-c9a8-11e7-89f4-c6b053eac242")
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.907059 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "kube-dns-token-987zv"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918450 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918475 27751 kuberuntime_manager.go:370] No sandbox for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918488 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0 1 2] ContainersToKill:map[]} for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918527 27751 kuberuntime_manager.go:565] SyncPod received new pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918536 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.918569 27751 kuberuntime_manager.go:626] Creating sandbox for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.921048 27751 expiration_cache.go:98] Entry version: {key:version obj:0xc420f30500} has expired
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.921810 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:21 af867b kubelet[27751]: I1115 02:01:21.921828 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.270358 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.272386 27751 manager.go:932] Added container: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b" (aliases: [k8s_POD_kube-dns-545bc4bfd4-zvfqd_kube-system_97270c63-c9a8-11e7-89f4-c6b053eac242_0 57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b], namespace: "docker")
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.272525 27751 handler.go:325] Added event &{/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b 2017-11-15 02:01:22.197194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.272564 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.275968 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b/resolv.conf with:
Nov 15 02:01:22 af867b kubelet[27751]: [nameserver 10.196.65.209 search opcwlaas.oraclecloud.internal. opcwlaas.oraclecloud.internal.]
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.276073 27751 plugins.go:392] Calling network plugin cni to set up pod "kube-dns-545bc4bfd4-zvfqd_kube-system"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.277634 27751 cni.go:326] Got netns path /proc/30231/ns/net
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.277645 27751 cni.go:327] Using netns path kube-system
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.277803 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.304782 27751 cni.go:326] Got netns path /proc/30231/ns/net
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.304796 27751 cni.go:327] Using netns path kube-system
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.304935 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.407498 27751 kuberuntime_manager.go:640] Created PodSandbox "57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b" for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.413095 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.2" for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.413230 27751 kuberuntime_manager.go:705] Creating container &Container{Name:kubedns,Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5,Command:[],Args:[--domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2],WorkingDir:,Ports:[{dns-local 0 10053 UDP } {dns-tcp-local 0 10053 TCP } {metrics 0 10055 TCP }],Env:[{PROMETHEUS_PORT 10055 nil}],Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[{kube-dns-config false /kube-dns-config <nil>} {kube-dns-token-987zv true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:8081,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.415191 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{kubedns}"}): type: 'Normal' reason: 'Pulling' pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.523488 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.523560 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.524917 27751 kubelet_pods.go:1284] Generating status for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.525144 27751 status_manager.go:325] Ignoring same status for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-apiserver-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-apiserver-amd64@sha256:872e3d4286a8ef4338df59945cb0d64c2622268ceb3e8a2ce7b52243279b02d0 ContainerID:docker://8da2c70a27f08a2f062af80b5708e01ac34ce76b42ab4a6eaa0288e2daf8a043}] QOSClass:Burstable}
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.525327 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.709402 27751 kube_docker_client.go:330] Pulling image "weaveworks/weave-npc:2.0.5": "a2592a033c5d: Downloading [====================================> ] 8.273MB/11.27MB"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.825573 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.825763 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d)"
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.959960 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:22 af867b kubelet[27751]: I1115 02:01:22.960001 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.047567 27751 generic.go:146] GenericPLEG: 97270c63-c9a8-11e7-89f4-c6b053eac242/57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b: non-existent -> running
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.049122 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"] for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.055005 27751 generic.go:345] PLEG: Write status for kube-dns-545bc4bfd4-zvfqd/kube-system: &container.PodStatus{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Name:"kube-dns-545bc4bfd4-zvfqd", Namespace:"kube-system", IP:"10.32.0.2", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42139c1e0)}} (err: <nil>)
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.055065 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"}
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.197471 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.211212 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:01:23 GMT] Content-Length:[18]] 0xc420e097a0 18 [] true false map[] 0xc420a31b00 <nil>}
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.211263 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.523446 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.523541 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.523762 27751 status_manager.go:325] Ignoring same status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a31626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable}
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.523936 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.617346 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.617390 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.623968 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:23 GMT]] 0xc420e09b00 2 [] false false map[] 0xc42110ba00 0xc42104ef20}
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.624019 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.824224 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.824389 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:01:23 af867b kubelet[27751]: I1115 02:01:23.834330 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - dnsmasq
Nov 15 02:01:24 af867b kubelet[27751]: I1115 02:01:24.504443 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - sidecar
Nov 15 02:01:24 af867b kubelet[27751]: I1115 02:01:24.523439 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:25 af867b kubelet[27751]: I1115 02:01:25.362915 27751 worker.go:164] Probe target container not found: weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242) - weave
Nov 15 02:01:25 af867b kubelet[27751]: I1115 02:01:25.867814 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:26 af867b kubelet[27751]: I1115 02:01:26.523447 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.506182 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.506208 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.507658 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421347440 2 [] true false map[] 0xc420a31700 <nil>}
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.507706 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.599386 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.681661 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.681697 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.682889 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc42099e380 2 [] true false map[] 0xc420d4be00 <nil>}
Nov 15 02:01:27 af867b kubelet[27751]: I1115 02:01:27.682936 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:28 af867b kubelet[27751]: I1115 02:01:28.523458 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:30 af867b kubelet[27751]: I1115 02:01:30.523951 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:30 af867b kubelet[27751]: I1115 02:01:30.869258 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.554950 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.624888 27751 helper.go:148] Missing default interface "eth0" for pod:kube-system_kube-dns-545bc4bfd4-zvfqd
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625012 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6970772Ki, capacity: 7393360Ki
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625032 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625040 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6532960Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625059 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7401080Ki, capacity: 10198Mi, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625070 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384695, capacity: 10208Ki, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625079 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40868352Ki, capacity: 45Gi, time: 2017-11-15 02:01:19.237424468 +0000 UTC
Nov 15 02:01:31 af867b kubelet[27751]: I1115 02:01:31.625100 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.523482 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.556200 27751 kube_docker_client.go:333] Stop pulling image "weaveworks/weave-npc:2.0.5": "Status: Downloaded newer image for weaveworks/weave-npc:2.0.5"
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.558552 27751 kuberuntime_image.go:46] Pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5" without credentials
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.558672 27751 kuberuntime_container.go:100] Generating ref for container weave-npc: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave-npc}"}
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.558739 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.558805 27751 kubelet_pods.go:123] container: kube-system/weave-net-rg7fn/weave-npc podIP: "10.196.65.210" creating hosts mount: true
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.559506 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave-npc}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "weaveworks/weave-npc:2.0.5"
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.566219 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.821540 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave-npc}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.960017 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.960058 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.981077 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c"
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.984077 27751 manager.go:932] Added container: "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c" (aliases: [k8s_weave-npc_weave-net-rg7fn_kube-system_b77b0858-c9a8-11e7-89f4-c6b053eac242_0 43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c], namespace: "docker")
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.984254 27751 handler.go:325] Added event &{/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c 2017-11-15 02:01:32.896194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.984304 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/podb77b0858-c9a8-11e7-89f4-c6b053eac242/43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c"
Nov 15 02:01:32 af867b kubelet[27751]: I1115 02:01:32.988919 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"weave-net-rg7fn", UID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"381", FieldPath:"spec.containers{weave-npc}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.100121 27751 generic.go:146] GenericPLEG: b77b0858-c9a8-11e7-89f4-c6b053eac242/43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c: non-existent -> running
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.106441 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f"] for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.113056 27751 generic.go:345] PLEG: Write status for weave-net-rg7fn/kube-system: &container.PodStatus{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Name:"weave-net-rg7fn", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc421cc70a0), (*container.ContainerStatus)(0xc421cc7180)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4219240f0)}} (err: <nil>)
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.113129 27751 kubelet.go:1871] SyncLoop (PLEG): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"b77b0858-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c"}
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.113176 27751 kubelet_pods.go:1284] Generating status for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.113434 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.134602 27751 secret.go:186] Setting up volume weave-net-token-rn6j7 for pod b77b0858-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.135151 27751 status_manager.go:451] Status for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:00:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:33 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 02:00:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:weave State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:weaveworks/weave-kube:2.0.5 ImageID:docker-pullable://weaveworks/weave-kube@sha256:1af289ad3cf6ddaa7bb6cc31ad32f64adf2728635c971e4c54399a291c7aeb96 ContainerID:docker://f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40} {Name:weave-npc State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:weaveworks/weave-npc:2.0.5 ImageID:docker-pullable://weaveworks/weave-npc@sha256:da936be1a2bd3f1c05cc80ab21e3282d15dd7d95223479fd563b6d1ae8c54ef3 ContainerID:docker://43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c}] QOSClass:Burstable})
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.135385 27751 config.go:282] Setting pods for source api
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.145071 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.146091 27751 secret.go:217] Received secret kube-system/weave-net-token-rn6j7 containing (3) pieces of data, 1900 total bytes
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.147900 27751 atomic_writer.go:142] pod kube-system/weave-net-rg7fn volume weave-net-token-rn6j7: no update required for target directory /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.197614 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.218183 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[18] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:01:33 GMT]] 0xc421dee0a0 18 [] true false map[] 0xc4211a8200 <nil>}
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.218248 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.413807 27751 volume_manager.go:366] All volumes are attached and mounted for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.414060 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.617369 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.617424 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.624691 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:33 GMT]] 0xc421def160 2 [] false false map[] 0xc42110a100 0xc42196a580}
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.624767 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:33 af867b kubelet[27751]: I1115 02:01:33.834499 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - dnsmasq
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.117271 27751 kubelet_pods.go:1284] Generating status for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.117555 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.131983 27751 status_manager.go:451] Status for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (3, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:00:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:33 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 02:00:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:weave State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:weaveworks/weave-kube:2.0.5 ImageID:docker-pullable://weaveworks/weave-kube@sha256:1af289ad3cf6ddaa7bb6cc31ad32f64adf2728635c971e4c54399a291c7aeb96 ContainerID:docker://f4fa9d4e3be52b17cedfe3b19a153e4584737c95ada75d64ea13225dde740f40} {Name:weave-npc State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:weaveworks/weave-npc:2.0.5 ImageID:docker-pullable://weaveworks/weave-npc@sha256:da936be1a2bd3f1c05cc80ab21e3282d15dd7d95223479fd563b6d1ae8c54ef3 ContainerID:docker://43fe4d3eda8b4078f76541a5e9dca827819ff4e80e9a043f89f50752401bf06c}] QOSClass:Burstable})
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.132296 27751 config.go:282] Setting pods for source api
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.135423 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.141676 27751 secret.go:186] Setting up volume weave-net-token-rn6j7 for pod b77b0858-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.155481 27751 secret.go:217] Received secret kube-system/weave-net-token-rn6j7 containing (3) pieces of data, 1900 total bytes
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.155646 27751 atomic_writer.go:142] pod kube-system/weave-net-rg7fn volume weave-net-token-rn6j7: no update required for target directory /var/lib/kubelet/pods/b77b0858-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/weave-net-token-rn6j7
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.417869 27751 volume_manager.go:366] All volumes are attached and mounted for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.418105 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.504658 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - sidecar
Nov 15 02:01:34 af867b kubelet[27751]: I1115 02:01:34.523520 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:35 af867b kubelet[27751]: I1115 02:01:35.362724 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:01:35 af867b kubelet[27751]: I1115 02:01:35.362793 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:35 af867b kubelet[27751]: I1115 02:01:35.366525 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:35 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc420d043e0 445 [] true false map[] 0xc420afd500 <nil>}
Nov 15 02:01:35 af867b kubelet[27751]: I1115 02:01:35.366585 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:01:35 af867b kubelet[27751]: I1115 02:01:35.870791 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:36 af867b kubelet[27751]: I1115 02:01:36.523464 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.506131 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.506171 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.508005 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4211457a0 2 [] true false map[] 0xc420ee2c00 <nil>}
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.508058 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.523453 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.523521 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.523698 27751 status_manager.go:325] Ignoring same status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort}
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.523909 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.599553 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.681769 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.681802 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.686121 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420bb1500 2 [] true false map[] 0xc420ee3700 <nil>}
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.686171 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.824154 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:01:37 af867b kubelet[27751]: I1115 02:01:37.824281 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
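[Editor's note] The computePodActions line records the sync decision for the etcd static pod: the sandbox is alive and its only container is already running, so there is nothing to create or kill. A simplified sketch of that decision follows; the types and function are stand-ins for illustration, not the kubelet's internal ones.

```go
package main

import "fmt"

// podActions is a stand-in for the kubelet's internal sync decision.
type podActions struct {
	KillPod           bool
	CreateSandbox     bool
	ContainersToStart []string
	ContainersToKill  []string
}

// computeActions reduces the decision shown in the log: with a live sandbox
// and every desired container already running, the sync loop has nothing to do.
func computeActions(sandboxReady bool, desired, running map[string]bool) podActions {
	a := podActions{}
	if !sandboxReady {
		// A dead or missing sandbox forces a full restart of the pod.
		a.KillPod = true
		a.CreateSandbox = true
		for name := range desired {
			a.ContainersToStart = append(a.ContainersToStart, name)
		}
		return a
	}
	for name := range desired {
		if !running[name] {
			a.ContainersToStart = append(a.ContainersToStart, name)
		}
	}
	for name := range running {
		if !desired[name] {
			a.ContainersToKill = append(a.ContainersToKill, name)
		}
	}
	return a
}

func main() {
	desired := map[string]bool{"etcd": true}
	running := map[string]bool{"etcd": true}
	// Prints an empty action set, matching the log line for etcd-af867b.
	fmt.Printf("%+v\n", computeActions(true, desired, running))
}
```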
Nov 15 02:01:38 af867b kubelet[27751]: I1115 02:01:38.523526 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:40 af867b kubelet[27751]: I1115 02:01:40.523904 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:40 af867b kubelet[27751]: I1115 02:01:40.872299 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.625267 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683527 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683572 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6485644Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:34.10605089 +0000 UTC
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683593 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7408308Ki, capacity: 10198Mi, time: 2017-11-15 02:01:34.10605089 +0000 UTC
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683605 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384627, capacity: 10208Ki, time: 2017-11-15 02:01:34.10605089 +0000 UTC
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683617 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39839Mi, capacity: 45Gi, time: 2017-11-15 02:01:34.10605089 +0000 UTC
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683629 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6967980Ki, capacity: 7393360Ki
Nov 15 02:01:41 af867b kubelet[27751]: I1115 02:01:41.683651 27751 eviction_manager.go:325] eviction manager: no resources are starved
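[Editor's note] On each housekeeping pass the eviction manager samples these signals (memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, plus the allocatable variants) and compares each observation against its configured thresholds; only when availability drops below a threshold does it start ranking pods for eviction, which is why the pass ends with "no resources are starved". A small sketch of that comparison is below; the threshold values are examples, not this node's configuration.

```go
package main

import "fmt"

// observation/threshold are illustrative stand-ins for the eviction
// manager's signal bookkeeping (quantities in bytes).
type observation struct {
	signal    string
	available int64
	capacity  int64
}

type threshold struct {
	signal string
	min    int64 // evict when available falls below this
}

// starvedSignals returns the signals whose observed availability is below
// their threshold; an empty result corresponds to the
// "no resources are starved" line above.
func starvedSignals(obs []observation, thresholds []threshold) []string {
	var starved []string
	for _, t := range thresholds {
		for _, o := range obs {
			if o.signal == t.signal && o.available < t.min {
				starved = append(starved, o.signal)
			}
		}
	}
	return starved
}

func main() {
	const Ki = 1024
	// Observations taken from the log lines above.
	obs := []observation{
		{"memory.available", 6485644 * Ki, 7393360 * Ki},
		{"nodefs.available", 7408308 * Ki, 10198 * 1024 * Ki},
	}
	// Example thresholds (memory.available<100Mi, nodefs.available<10%).
	thr := []threshold{
		{"memory.available", 100 * 1024 * Ki},
		{"nodefs.available", 10198 * 1024 * Ki / 10},
	}
	fmt.Println("starved:", starvedSignals(obs, thr)) // starved: []
}
```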
Nov 15 02:01:42 af867b kubelet[27751]: I1115 02:01:42.523476 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:42 af867b kubelet[27751]: I1115 02:01:42.960000 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:42 af867b kubelet[27751]: I1115 02:01:42.960037 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.197626 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.212432 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:43 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420eb7980 18 [] true false map[] 0xc421178100 <nil>}
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.212484 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.617311 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.617347 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.624135 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:43 GMT]] 0xc4216c55a0 2 [] false false map[] 0xc420c91000 0xc4210d4630}
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.624181 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:43 af867b kubelet[27751]: I1115 02:01:43.834482 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - dnsmasq
Nov 15 02:01:44 af867b kubelet[27751]: I1115 02:01:44.159387 27751 kube_docker_client.go:330] Pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5": "a46f95c56b32: Downloading [==================================> ] 7.623MB/11.16MB"
Nov 15 02:01:44 af867b kubelet[27751]: I1115 02:01:44.504602 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - sidecar
Nov 15 02:01:44 af867b kubelet[27751]: I1115 02:01:44.523442 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:45 af867b kubelet[27751]: I1115 02:01:45.362708 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:01:45 af867b kubelet[27751]: I1115 02:01:45.362744 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:45 af867b kubelet[27751]: I1115 02:01:45.364147 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:45 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc420c99680 445 [] true false map[] 0xc421179300 <nil>}
Nov 15 02:01:45 af867b kubelet[27751]: I1115 02:01:45.364204 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:01:45 af867b kubelet[27751]: I1115 02:01:45.873728 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:46 af867b kubelet[27751]: I1115 02:01:46.523448 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.506092 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.506128 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.507756 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:47 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc42147df20 2 [] true false map[] 0xc421179d00 <nil>}
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.507796 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.600800 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.681591 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.681617 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.683280 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:47 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420f4be20 2 [] true false map[] 0xc420afcd00 <nil>}
Nov 15 02:01:47 af867b kubelet[27751]: I1115 02:01:47.683333 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.488111 27751 kube_docker_client.go:333] Stop pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5": "Status: Downloaded newer image for gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.489776 27751 kuberuntime_container.go:100] Generating ref for container kubedns: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{kubedns}"}
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.489814 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.489873 27751 kubelet_pods.go:123] container: kube-system/kube-dns-545bc4bfd4-zvfqd/kubedns podIP: "10.32.0.2" creating hosts mount: true
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.490449 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{kubedns}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.493581 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.524520 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.661615 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{kubedns}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.783830 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211"
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.786161 27751 kuberuntime_manager.go:705] Creating container &Container{Name:dnsmasq,Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5,Command:[],Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053],WorkingDir:,Ports:[{dns 0 53 UDP } {dns-tcp 0 53 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{150 -3} {<nil>} 150m DecimalSI},memory: {{20971520 0} {<nil>} 20Mi BinarySI},},},VolumeMounts:[{kube-dns-config false /etc/k8s/dns/dnsmasq-nanny <nil>} {kube-dns-token-987zv true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.786608 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{kubedns}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.789551 27751 manager.go:932] Added container: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211" (aliases: [k8s_kubedns_kube-dns-545bc4bfd4-zvfqd_kube-system_97270c63-c9a8-11e7-89f4-c6b053eac242_0 e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211], namespace: "docker")
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.789724 27751 handler.go:325] Added event &{/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211 2017-11-15 02:01:48.723194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.789763 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211"
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.799920 27751 kuberuntime_image.go:46] Pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5" without credentials
Nov 15 02:01:48 af867b kubelet[27751]: I1115 02:01:48.800007 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{dnsmasq}"}): type: 'Normal' reason: 'Pulling' pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
Nov 15 02:01:49 af867b kubelet[27751]: I1115 02:01:49.185023 27751 generic.go:146] GenericPLEG: 97270c63-c9a8-11e7-89f4-c6b053eac242/e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211: non-existent -> running
Nov 15 02:01:49 af867b kubelet[27751]: I1115 02:01:49.186274 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"] for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:01:49 af867b kubelet[27751]: I1115 02:01:49.197224 27751 generic.go:345] PLEG: Write status for kube-dns-545bc4bfd4-zvfqd/kube-system: &container.PodStatus{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Name:"kube-dns-545bc4bfd4-zvfqd", Namespace:"kube-system", IP:"10.32.0.2", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4217d6a80)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421a515e0)}} (err: <nil>)
Nov 15 02:01:49 af867b kubelet[27751]: I1115 02:01:49.197288 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211"}
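[Editor's note] The GenericPLEG lines show the relist step noticing the freshly started kubedns container ("non-existent -> running") and turning that transition into a ContainerStarted event that wakes the sync loop. A compact sketch of that diffing step follows, with simplified state and event types rather than the kubelet's own.

```go
package main

import "fmt"

// podLifecycleEvent is a simplified stand-in for the PLEG event fed into
// the kubelet's sync loop.
type podLifecycleEvent struct {
	PodID       string
	Type        string // "ContainerStarted", "ContainerDied", ...
	ContainerID string
}

// relist compares the container states seen on the previous pass with the
// current ones and emits one event per transition, the way the
// "non-existent -> running" line above is produced.
func relist(old, current map[string]string, podID string) []podLifecycleEvent {
	var events []podLifecycleEvent
	for id, state := range current {
		if old[id] != state && state == "running" {
			events = append(events, podLifecycleEvent{podID, "ContainerStarted", id})
		}
	}
	for id, state := range old {
		if _, still := current[id]; !still && state == "running" {
			events = append(events, podLifecycleEvent{podID, "ContainerDied", id})
		}
	}
	return events
}

func main() {
	old := map[string]string{}
	current := map[string]string{"e430c4a0a025": "running"}
	for _, e := range relist(old, current, "97270c63-c9a8-11e7-89f4-c6b053eac242") {
		fmt.Printf("%+v\n", e)
	}
}
```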
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.523854 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.531924 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.562647 27751 kubelet.go:1222] Container garbage collection succeeded
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827105 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827142 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827152 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827159 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827166 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827173 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827179 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827189 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827198 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827205 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827211 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827218 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827224 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827231 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827239 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827247 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827254 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827263 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827270 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827277 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827285 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827292 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827297 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827304 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827309 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827316 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827325 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-439f518d46ec0d596518b7c17c503c52a85dd4da21d8a56ec174d5b6b3b98e03-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827332 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827339 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827347 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827354 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827361 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827369 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827375 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827382 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827391 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827397 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827404 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827413 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-8c430fd9b74591760bf92ca717db0989293743473d027f75b1dfded4d0661504.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827420 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827425 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827432 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827436 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827444 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827452 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827459 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827465 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827471 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827476 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827483 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827492 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827498 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827504 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827511 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827516 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827523 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827531 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827541 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827549 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827558 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827565 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827571 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827577 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827582 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827588 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827595 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827600 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827607 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827616 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827623 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827631 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827642 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827650 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827657 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827666 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827672 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827678 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827685 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827690 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827697 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827705 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827728 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827737 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827745 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-6ddc96a63fb65ccd1743152170b39274e0eb3e4ccc4948bf2c0f5a6130ee9ce7.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827752 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827758 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827764 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827769 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827777 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827785 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827792 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827800 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount", but ignoring.
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.827810 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
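[Editor's note] The long run of factory.go/manager.go lines above is cAdvisor walking every cgroup it discovers (here mostly systemd .mount units) and asking each registered factory whether it wants to track it: the docker factory declines anything that is not a Docker container, the systemd factory recognizes the mounts but deliberately ignores them, so no watcher is created. Below is a condensed sketch of that selection loop; the factory interface and matching rules here are hypothetical stand-ins, not cAdvisor's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// containerFactory is a hypothetical stand-in for cAdvisor's factory
// interface: canHandle says whether the factory recognizes the cgroup,
// accept says whether it actually wants to track it.
type containerFactory interface {
	name() string
	canHandle(cgroup string) bool
	accept(cgroup string) bool
}

type dockerFactory struct{}

func (dockerFactory) name() string            { return "docker" }
func (dockerFactory) canHandle(c string) bool { return strings.Contains(c, "/kubepods/") }
func (dockerFactory) accept(c string) bool    { return true }

type systemdFactory struct{}

func (systemdFactory) name() string            { return "systemd" }
func (systemdFactory) canHandle(c string) bool { return strings.HasSuffix(c, ".mount") }
func (systemdFactory) accept(c string) bool    { return false } // recognized, but ignored

// track decides whether any factory wants to watch the cgroup, logging the
// same shape of decisions as the lines above.
func track(cgroup string, factories []containerFactory) bool {
	for _, f := range factories {
		if !f.canHandle(cgroup) {
			fmt.Printf("Factory %q was unable to handle container %q\n", f.name(), cgroup)
			continue
		}
		if !f.accept(cgroup) {
			fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.name(), cgroup)
			return false
		}
		return true
	}
	return false
}

func main() {
	factories := []containerFactory{dockerFactory{}, systemdFactory{}}
	track("/system.slice/dev-hugepages.mount", factories)
}
```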
Nov 15 02:01:50 af867b kubelet[27751]: I1115 02:01:50.892425 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.683977 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731864 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6483764Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731907 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7392020Ki, capacity: 10198Mi, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731918 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384617, capacity: 10208Ki, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731926 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40779264Ki, capacity: 45Gi, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731935 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6934588Ki, capacity: 7393360Ki
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731941 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:01:51 af867b kubelet[27751]: I1115 02:01:51.731958 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:01:52 af867b kubelet[27751]: I1115 02:01:52.523462 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:52 af867b kubelet[27751]: I1115 02:01:52.959942 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:01:52 af867b kubelet[27751]: I1115 02:01:52.959975 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.197611 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.211104 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:53 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421a68ae0 18 [] true false map[] 0xc420b09900 <nil>}
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.211154 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.617384 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.617421 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.623935 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:01:53 GMT]] 0xc421c300c0 2 [] false false map[] 0xc420ee2100 0xc420e78580}
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.623983 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:01:53 af867b kubelet[27751]: I1115 02:01:53.834530 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - dnsmasq
Nov 15 02:01:54 af867b kubelet[27751]: I1115 02:01:54.504581 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - sidecar
Nov 15 02:01:54 af867b kubelet[27751]: I1115 02:01:54.523430 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:55 af867b kubelet[27751]: I1115 02:01:55.362751 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:01:55 af867b kubelet[27751]: I1115 02:01:55.362783 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:55 af867b kubelet[27751]: I1115 02:01:55.364544 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:55 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc42014e320 445 [] true false map[] 0xc420ee2e00 <nil>}
Nov 15 02:01:55 af867b kubelet[27751]: I1115 02:01:55.364583 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:01:55 af867b kubelet[27751]: I1115 02:01:55.894203 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:01:56 af867b kubelet[27751]: I1115 02:01:56.523448 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.506117 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.506155 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.507008 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:01:57 GMT] Content-Length:[2]] 0xc420f3e6c0 2 [] true false map[] 0xc4211a8100 <nil>}
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.507051 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.599531 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.681641 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.681665 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.683216 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:01:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4214f7140 2 [] true false map[] 0xc4211a8e00 <nil>}
Nov 15 02:01:57 af867b kubelet[27751]: I1115 02:01:57.683274 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:01:58 af867b kubelet[27751]: I1115 02:01:58.523446 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.498824 27751 kube_docker_client.go:330] Pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5": "62106dacdb76: Extracting [====================================> ] 6.685MB/9.193MB"
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.528253 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.758862 27751 kube_docker_client.go:333] Stop pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5": "Status: Downloaded newer image for gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.760121 27751 kuberuntime_container.go:100] Generating ref for container dnsmasq: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{dnsmasq}"}
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.760164 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.760221 27751 kubelet_pods.go:123] container: kube-system/kube-dns-545bc4bfd4-zvfqd/dnsmasq podIP: "10.32.0.2" creating hosts mount: true
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.760671 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{dnsmasq}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.762040 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.896997 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:02:00 af867b kubelet[27751]: I1115 02:02:00.917696 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{dnsmasq}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.072690 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a"
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.073665 27751 kuberuntime_manager.go:705] Creating container &Container{Name:sidecar,Image:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5,Command:[],Args:[--v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A],WorkingDir:,Ports:[{metrics 0 10054 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {<nil>} 10m DecimalSI},memory: {{20971520 0} {<nil>} 20Mi BinarySI},},},VolumeMounts:[{kube-dns-token-987zv true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.073943 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{dnsmasq}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.075623 27751 kuberuntime_image.go:46] Pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5" without credentials
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.075698 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{sidecar}"}): type: 'Normal' reason: 'Pulling' pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.075749 27751 manager.go:932] Added container: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a" (aliases: [k8s_dnsmasq_kube-dns-545bc4bfd4-zvfqd_kube-system_97270c63-c9a8-11e7-89f4-c6b053eac242_0 49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a], namespace: "docker")
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.075878 27751 handler.go:325] Added event &{/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a 2017-11-15 02:02:01.000194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.075927 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a"
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.256577 27751 generic.go:146] GenericPLEG: 97270c63-c9a8-11e7-89f4-c6b053eac242/49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a: non-existent -> running
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.257422 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"] for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.266524 27751 generic.go:345] PLEG: Write status for kube-dns-545bc4bfd4-zvfqd/kube-system: &container.PodStatus{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Name:"kube-dns-545bc4bfd4-zvfqd", Namespace:"kube-system", IP:"10.32.0.2", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc421a07a40), (*container.ContainerStatus)(0xc421a07c00)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4214ed4f0)}} (err: <nil>)
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.266599 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a"}
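[Editor's note] The "Setting cgroup parent" lines above place each kube-dns container under /kubepods/burstable/pod&lt;uid&gt;, i.e. the pod-level cgroup derived from the pod's QoS class (this pod is Burstable because its containers set resource requests without limits). A small sketch of deriving such a parent path follows; the function is illustrative and ignores the systemd cgroup-driver naming variant.

```go
package main

import "fmt"

// podCgroupParent builds the cgroupfs-style pod cgroup path seen in the
// "Setting cgroup parent" log lines: /kubepods/<qos>/pod<uid>, with the
// Guaranteed class living directly under /kubepods.
func podCgroupParent(qosClass, podUID string) string {
	switch qosClass {
	case "Guaranteed":
		return fmt.Sprintf("/kubepods/pod%s", podUID)
	case "Burstable":
		return fmt.Sprintf("/kubepods/burstable/pod%s", podUID)
	default: // BestEffort
		return fmt.Sprintf("/kubepods/besteffort/pod%s", podUID)
	}
}

func main() {
	fmt.Println(podCgroupParent("Burstable", "97270c63-c9a8-11e7-89f4-c6b053eac242"))
	// /kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242
}
```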
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.732127 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783017 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7392020Ki, capacity: 10198Mi, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783071 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384617, capacity: 10208Ki, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783084 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 40779264Ki, capacity: 45Gi, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783095 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6933156Ki, capacity: 7393360Ki
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783104 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783114 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6483764Ki, capacity: 7393360Ki, time: 2017-11-15 02:01:46.953790411 +0000 UTC
Nov 15 02:02:01 af867b kubelet[27751]: I1115 02:02:01.783139 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:02:02 af867b kubelet[27751]: I1115 02:02:02.523464 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:02 af867b kubelet[27751]: I1115 02:02:02.959981 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:02:02 af867b kubelet[27751]: I1115 02:02:02.960027 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.197632 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.211645 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:03 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421914d60 18 [] true false map[] 0xc420afda00 <nil>}
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.211697 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.617264 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.617302 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.625346 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Length:[2] Date:[Wed, 15 Nov 2017 02:02:03 GMT] Content-Type:[text/plain; charset=utf-8]] 0xc42186a980 2 [] false false map[] 0xc42110a000 0xc4215dc9a0}
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.625386 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:02:03 af867b kubelet[27751]: I1115 02:02:03.834498 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - dnsmasq
Nov 15 02:02:04 af867b kubelet[27751]: I1115 02:02:04.504601 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - sidecar
Nov 15 02:02:04 af867b kubelet[27751]: I1115 02:02:04.523446 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:05 af867b kubelet[27751]: I1115 02:02:05.362708 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:02:05 af867b kubelet[27751]: I1115 02:02:05.362737 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:05 af867b kubelet[27751]: I1115 02:02:05.364098 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:05 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc421101a80 445 [] true false map[] 0xc4211a9900 <nil>}
Nov 15 02:02:05 af867b kubelet[27751]: I1115 02:02:05.364144 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:02:05 af867b kubelet[27751]: I1115 02:02:05.898259 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:02:06 af867b kubelet[27751]: I1115 02:02:06.523454 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.506101 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.506135 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.507912 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:07 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4216f1320 2 [] true false map[] 0xc420ee2600 <nil>}
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.507954 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.599520 27751 worker.go:164] Probe target container not found: kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242) - kubedns
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.681619 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.681640 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.682651 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:07 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421c300a0 2 [] true false map[] 0xc420ee2f00 <nil>}
Nov 15 02:02:07 af867b kubelet[27751]: I1115 02:02:07.682690 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:02:08 af867b kubelet[27751]: I1115 02:02:08.523449 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.198378 27751 kube_docker_client.go:333] Stop pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5": "Status: Downloaded newer image for gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.199624 27751 kuberuntime_container.go:100] Generating ref for container sidecar: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{sidecar}"}
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.199671 27751 container_manager_linux.go:634] Calling devicePluginHandler AllocateDevices
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.199742 27751 kubelet_pods.go:123] container: kube-system/kube-dns-545bc4bfd4-zvfqd/sidecar podIP: "10.32.0.2" creating hosts mount: true
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.200380 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{sidecar}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
Nov 15 02:02:10 af867b kubelet[27751]: W1115 02:02:10.201773 27751 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (nobody)
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.207112 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242"
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.420911 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{sidecar}"}): type: 'Normal' reason: 'Created' Created container
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.523403 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.561246 27751 factory.go:112] Using factory "docker" for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f"
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.571800 27751 manager.go:932] Added container: "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f" (aliases: [k8s_sidecar_kube-dns-545bc4bfd4-zvfqd_kube-system_97270c63-c9a8-11e7-89f4-c6b053eac242_0 12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f], namespace: "docker")
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.572406 27751 handler.go:325] Added event &{/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f 2017-11-15 02:02:10.485194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.573602 27751 container.go:409] Start housekeeping for container "/kubepods/burstable/pod97270c63-c9a8-11e7-89f4-c6b053eac242/12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f"
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.585585 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-545bc4bfd4-zvfqd", UID:"97270c63-c9a8-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{sidecar}"}): type: 'Normal' reason: 'Started' Started container
Nov 15 02:02:10 af867b kubelet[27751]: I1115 02:02:10.899758 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.312027 27751 generic.go:146] GenericPLEG: 97270c63-c9a8-11e7-89f4-c6b053eac242/12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f: non-existent -> running
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.313617 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b"] for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.326550 27751 generic.go:345] PLEG: Write status for kube-dns-545bc4bfd4-zvfqd/kube-system: &container.PodStatus{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Name:"kube-dns-545bc4bfd4-zvfqd", Namespace:"kube-system", IP:"10.32.0.2", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc421cc6700), (*container.ContainerStatus)(0xc421a06e00), (*container.ContainerStatus)(0xc421cc69a0)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421ee1180)}} (err: <nil>)
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.326621 27751 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"97270c63-c9a8-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f"}
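The GenericPLEG lines above show the relist loop noticing the sidecar container change (non-existent -> running) and handing the sync loop a PodLifecycleEvent, which then triggers a sync of the affected pod. Below is a simplified stand-in for the event shape printed at kubelet.go:1871, with field names copied from the log line; it is an illustration, not the real pleg package types.

package main

import "fmt"

// PodLifecycleEvent is a simplified stand-in for the event printed at
// kubelet.go:1871 above; only the fields visible in the log are modeled.
type PodLifecycleEvent struct {
    ID   string      // pod UID
    Type string      // e.g. "ContainerStarted"
    Data interface{} // container ID for container events
}

func main() {
    ev := PodLifecycleEvent{
        ID:   "97270c63-c9a8-11e7-89f4-c6b053eac242",
        Type: "ContainerStarted",
        Data: "12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f",
    }
    // The sync loop dispatches on the event type and syncs the affected pod,
    // which is what the SyncLoop (PLEG) line above records.
    fmt.Printf("SyncLoop (PLEG): pod %s, event %s, data %v\n", ev.ID, ev.Type, ev.Data)
}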
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.326685 27751 kubelet_pods.go:1284] Generating status for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.326992 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.336692 27751 config.go:282] Setting pods for source api
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.338509 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.336857 27751 status_manager.go:451] Status for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (2, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubedns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.32.0.2 StartTime:2017-11-15 02:01:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:dnsmasq State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:46b933bb70270c8a02fa6b6f87d440f6f1fce1a5a2a719e164f83f7b109f7544 ContainerID:docker://49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a} {Name:kubedns State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:48 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:1a3fc069de481ae690188f6f1ba4664b5cc7760af37120f70c86505c79eea61d ContainerID:docker://e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211} {Name:sidecar State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:10 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Ima
Nov 15 02:02:11 af867b kubelet[27751]: ge:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:9aab42bf6a2a068b797fe7d91a5d8d915b10dbbc3d6f2b10492848debfba6044 ContainerID:docker://12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f}] QOSClass:Burstable})
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.371074 27751 configmap.go:187] Setting up volume kube-dns-config for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.371465 27751 secret.go:186] Setting up volume kube-dns-token-987zv for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.373530 27751 configmap.go:218] Received configMap kube-system/kube-dns containing (0) pieces of data, 0 total bytes
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.373612 27751 atomic_writer.go:142] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-config: no update required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.374372 27751 secret.go:217] Received secret kube-system/kube-dns-token-987zv containing (3) pieces of data, 1896 total bytes
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.374519 27751 atomic_writer.go:142] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-token-987zv: no update required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.627250 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
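Each pod sync re-projects configMap and secret volumes, and atomic_writer skips the write when the payload already on disk matches, which is why the log keeps printing "no update required for target directory". Below is a simplified Go sketch of that compare-then-write idea; the real atomic_writer additionally performs the update atomically via timestamped directories, which is omitted here, and the key name used in main is purely illustrative.

package main

import (
    "bytes"
    "fmt"
    "io/ioutil"
    "os"
    "path/filepath"
)

// syncProjection illustrates the behavior behind the atomic_writer lines above:
// compare the desired payload of a projected volume (key -> bytes) with what is
// already on disk and only rewrite files that actually changed.
func syncProjection(targetDir string, payload map[string][]byte) error {
    updates := 0
    for name, want := range payload {
        path := filepath.Join(targetDir, name)
        have, err := ioutil.ReadFile(path)
        if err == nil && bytes.Equal(have, want) {
            continue // unchanged, nothing to do
        }
        if err := ioutil.WriteFile(path, want, 0644); err != nil {
            return err
        }
        updates++
    }
    if updates == 0 {
        fmt.Printf("no update required for target directory %s\n", targetDir)
    }
    return nil
}

func main() {
    dir, _ := ioutil.TempDir("", "projection")
    defer os.RemoveAll(dir)
    payload := map[string][]byte{"token": []byte("example")} // illustrative key/value
    _ = syncProjection(dir, payload)                          // first sync writes the file
    _ = syncProjection(dir, payload)                          // second sync prints "no update required"
}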
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.627538 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.783298 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.840972 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 6470564Ki, capacity: 7393360Ki, time: 2017-11-15 02:02:06.156376344 +0000 UTC
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841011 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7404548Ki, capacity: 10198Mi, time: 2017-11-15 02:02:06.156376344 +0000 UTC
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841021 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384529, capacity: 10208Ki, time: 2017-11-15 02:02:06.156376344 +0000 UTC
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841029 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39696Mi, capacity: 45Gi, time: 2017-11-15 02:02:06.156376344 +0000 UTC
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841036 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6923584Ki, capacity: 7393360Ki
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841042 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:02:11 af867b kubelet[27751]: I1115 02:02:11.841061 27751 eviction_manager.go:325] eviction manager: no resources are starved
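The eviction manager lines show each signal being observed and compared against its configured threshold; with about 6.2Gi of roughly 7Gi of memory available and nodefs/imagefs far above their minimums, nothing is starved. The Go sketch below reproduces that comparison using the observations from the 02:02:06 sample above, and assumes kubelet's documented default hard-eviction thresholds (memory.available<100Mi, nodefs.available<10%, imagefs.available<15%), since the flags actually in effect are not visible in this excerpt.

package main

import "fmt"

// signal holds one eviction-manager observation from the helpers.go:871 lines
// above, expressed in kibibytes for simplicity.
type signal struct {
    name      string
    available float64
    capacity  float64
}

func main() {
    // Observations copied from the 02:02:06 sample above.
    obs := []signal{
        {"memory.available", 6470564, 7393360},
        {"nodefs.available", 7404548, 10198 * 1024},
        {"imagefs.available", 39696 * 1024, 45 * 1024 * 1024},
    }
    // Thresholds assume the documented hard-eviction defaults; the real
    // values come from kubelet flags that are not shown in this log excerpt.
    minAvail := map[string]float64{
        "memory.available":  100 * 1024,
        "nodefs.available":  0.10 * 10198 * 1024,
        "imagefs.available": 0.15 * 45 * 1024 * 1024,
    }
    for _, s := range obs {
        if s.available < minAvail[s.name] {
            fmt.Printf("%s below threshold: would trigger eviction\n", s.name)
            continue
        }
        fmt.Printf("%s ok (%.0fKi available of %.0fKi capacity)\n", s.name, s.available, s.capacity)
    }
    // Matches the log line: "eviction manager: no resources are starved".
}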
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.331202 27751 kubelet_pods.go:1284] Generating status for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.331420 27751 status_manager.go:325] Ignoring same status for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubedns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.32.0.2 StartTime:2017-11-15 02:01:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:dnsmasq State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:46b933bb70270c8a02fa6b6f87d440f6f1fce1a5a2a719e164f83f7b109f7544 ContainerID:docker://49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a} {Name:kubedns State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:48 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:1a3fc069de481ae690188f6f1ba4664b5cc7760af37120f70c86505c79eea61d ContainerID:docker://e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211} {Name:sidecar State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:10 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:
Nov 15 02:02:12 af867b kubelet[27751]: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:9aab42bf6a2a068b797fe7d91a5d8d915b10dbbc3d6f2b10492848debfba6044 ContainerID:docker://12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f}] QOSClass:Burstable}
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.331616 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.373498 27751 configmap.go:187] Setting up volume kube-dns-config for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.373499 27751 secret.go:186] Setting up volume kube-dns-token-987zv for pod 97270c63-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.377133 27751 secret.go:217] Received secret kube-system/kube-dns-token-987zv containing (3) pieces of data, 1896 total bytes
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.377333 27751 atomic_writer.go:142] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-token-987zv: no update required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-dns-token-987zv
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.377539 27751 configmap.go:218] Received configMap kube-system/kube-dns containing (0) pieces of data, 0 total bytes
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.377577 27751 atomic_writer.go:142] pod kube-system/kube-dns-545bc4bfd4-zvfqd volume kube-dns-config: no update required for target directory /var/lib/kubelet/pods/97270c63-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-dns-config
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.523453 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.631843 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.632103 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
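computePodActions summarizes what a sync has to do: here the sandbox already exists and all three kube-dns containers are running, so nothing needs to be created, started, or killed and the sync is a no-op. Below is a simplified stand-in for that decision, with field names copied from the log line; it illustrates the idea only and is not kubelet's kuberuntime implementation.

package main

import "fmt"

// podActions is a simplified stand-in for the struct logged at
// kuberuntime_manager.go:556; only the fields visible in the log appear.
type podActions struct {
    KillPod           bool
    CreateSandbox     bool
    SandboxID         string
    ContainersToStart []string
    ContainersToKill  map[string]string
}

// computePodActions sketches the decision visible in the log: when the sandbox
// already exists and every desired container is running, there is nothing to
// start or kill, so the sync becomes a no-op.
func computePodActions(sandboxID string, desired, running []string) podActions {
    actions := podActions{
        SandboxID:        sandboxID,
        ContainersToKill: map[string]string{},
    }
    runningSet := map[string]bool{}
    for _, c := range running {
        runningSet[c] = true
    }
    for _, c := range desired {
        if !runningSet[c] {
            actions.ContainersToStart = append(actions.ContainersToStart, c)
        }
    }
    return actions
}

func main() {
    got := computePodActions(
        "57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa21ee5fab8b82b",
        []string{"kubedns", "dnsmasq", "sidecar"},
        []string{"kubedns", "dnsmasq", "sidecar"},
    )
    fmt.Printf("computePodActions got %+v\n", got)
}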
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.959944 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:02:12 af867b kubelet[27751]: I1115 02:02:12.959970 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.210831 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:13 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421210fe0 18 [] true false map[] 0xc420afc300 <nil>}
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.210887 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.617255 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.617293 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.624277 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:02:13 GMT]] 0xc421347aa0 2 [] false false map[] 0xc420afca00 0xc42132c580}
Nov 15 02:02:13 af867b kubelet[27751]: I1115 02:02:13.624322 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:02:14 af867b kubelet[27751]: I1115 02:02:14.523425 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.362686 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.362708 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.364092 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:02:15 GMT] Content-Length:[445]] 0xc4211006c0 445 [] true false map[] 0xc4211a9500 <nil>}
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.364130 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.523438 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.523519 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.523674 27751 status_manager.go:325] Ignoring same status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:59:20 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:63210bc9690144d41126a646caf03a3d76ddc6d06b8bad119d468193c3e90c24 ContainerID:docker://7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34}] QOSClass:BestEffort}
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.523852 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.580233 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.580236 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.582739 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.582807 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.582894 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.582903 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.825852 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.825972 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:15 af867b kubelet[27751]: I1115 02:02:15.904598 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:02:16 af867b kubelet[27751]: I1115 02:02:16.523453 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.506784 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.506824 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.508698 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc42192c060 2 [] true false map[] 0xc420c91b00 <nil>}
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.508763 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.599548 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.599577 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.599911 27751 status_manager.go:203] Container readiness unchanged (false): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" - "docker://e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211"
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.601188 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:17 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc421d02300 3 [] true false map[] 0xc420b09400 <nil>}
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.601244 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.609030 27751 config.go:282] Setting pods for source api
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.610347 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.613936 27751 status_manager.go:451] Status for pod "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242)" updated successfully: (3, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:02:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:01:21 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.32.0.2 StartTime:2017-11-15 02:01:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:dnsmasq State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:46b933bb70270c8a02fa6b6f87d440f6f1fce1a5a2a719e164f83f7b109f7544 ContainerID:docker://49d5926d73b0ca6b1f1b8b2f76e9e3623ecaecb67d89db69f0f94b5dfe890a5a} {Name:kubedns State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:01:48 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 ImageID:docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:1a3fc069de481ae690188f6f1ba4664b5cc7760af37120f70c86505c79eea61d ContainerID:docker://e430c4a0a025c95f870505c4ec608f56ad8cd6517f0e425139fb82fb5bfd6211} {Name:sidecar State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:02:10 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5 Imag
Nov 15 02:02:17 af867b kubelet[27751]: eID:docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:9aab42bf6a2a068b797fe7d91a5d8d915b10dbbc3d6f2b10492848debfba6044 ContainerID:docker://12935c093ed0d04807a97f6b78b64a7a87f39b36f15aad612e8e47dba47ac96f}] QOSClass:Burstable})
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.681660 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.681696 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.683472 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:02:17 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421d22100 2 [] true false map[] 0xc420b09c00 <nil>}
Nov 15 02:02:17 af867b kubelet[27751]: I1115 02:02:17.683512 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:02:18 af867b kubelet[27751]: I1115 02:02:18.523431 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:20 af867b kubelet[27751]: I1115 02:02:20.523423 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:02:20 af867b kubelet[27751]: I1115 02:02:20.912239 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.199466 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.463080 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:23 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421528840 18 [] true false map[] 0xc420ee2e00 <nil>}
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.463189 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.617350 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.617410 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.630597 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:16:23 GMT]] 0xc4215296a0 2 [] false false map[] 0xc420ee3500 0xc42140e840}
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.630643 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.834501 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/dnsmasq
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.834541 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.835543 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/dnsmasq, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[51] Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:23 GMT]] 0xc4214f7fa0 51 [] true false map[] 0xc420d4b700 <nil>}
Nov 15 02:16:23 af867b kubelet[27751]: I1115 02:16:23.835592 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):dnsmasq" succeeded
Nov 15 02:16:24 af867b kubelet[27751]: I1115 02:16:24.504617 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /metrics
Nov 15 02:16:24 af867b kubelet[27751]: I1115 02:16:24.504654 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:24 af867b kubelet[27751]: I1115 02:16:24.510100 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/metrics, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; version=0.0.4] Date:[Wed, 15 Nov 2017 02:16:24 GMT]] 0xc4213588e0 -1 [] true true map[] 0xc420ee3b00 <nil>}
Nov 15 02:16:24 af867b kubelet[27751]: I1115 02:16:24.510163 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):sidecar" succeeded
Nov 15 02:16:24 af867b kubelet[27751]: I1115 02:16:24.523449 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:25 af867b kubelet[27751]: I1115 02:16:25.362734 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:16:25 af867b kubelet[27751]: I1115 02:16:25.362783 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:25 af867b kubelet[27751]: I1115 02:16:25.365672 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[445] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:16:25 GMT]] 0xc420aa7dc0 445 [] true false map[] 0xc4211a8700 <nil>}
Nov 15 02:16:25 af867b kubelet[27751]: I1115 02:16:25.365740 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.216297 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.373008 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.431639 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384476, capacity: 10208Ki, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.431677 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39632Mi, capacity: 45Gi, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.431689 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6884156Ki, capacity: 7393360Ki
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.431697 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.431704 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 5609352Ki, capacity: 7393360Ki, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.432749 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7399676Ki, capacity: 10198Mi, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.432776 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:16:26 af867b kubelet[27751]: I1115 02:16:26.523494 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.599530 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.599566 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.601375 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:27 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc421bf2a80 3 [] true false map[] 0xc42110a600 <nil>}
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.601416 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.681665 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.681703 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.683017 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:27 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421bf2e60 2 [] true false map[] 0xc42110ab00 <nil>}
Nov 15 02:16:27 af867b kubelet[27751]: I1115 02:16:27.683056 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:16:28 af867b kubelet[27751]: I1115 02:16:28.523439 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:29 af867b kubelet[27751]: I1115 02:16:29.214125 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:16:29 af867b kubelet[27751]: I1115 02:16:29.214182 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:29 af867b kubelet[27751]: I1115 02:16:29.217451 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:16:29 GMT]] 0xc42203fd80 2 [] true false map[] 0xc420b08700 <nil>}
Nov 15 02:16:29 af867b kubelet[27751]: I1115 02:16:29.217500 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:16:30 af867b kubelet[27751]: I1115 02:16:30.523436 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:31 af867b kubelet[27751]: I1115 02:16:31.217902 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:32 af867b kubelet[27751]: I1115 02:16:32.523520 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:32 af867b kubelet[27751]: I1115 02:16:32.959955 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:16:32 af867b kubelet[27751]: I1115 02:16:32.960000 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.197613 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/kubedns
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.197659 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.198672 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/kubedns, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:33 GMT] Content-Length:[51] Content-Type:[application/json]] 0xc421176480 51 [] true false map[] 0xc420c90b00 <nil>}
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.198745 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.462133 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:33 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421fe56c0 18 [] true false map[] 0xc420c90400 <nil>}
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.462212 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.617411 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.617459 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.624589 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Length:[2] Date:[Wed, 15 Nov 2017 02:16:33 GMT] Content-Type:[text/plain; charset=utf-8]] 0xc421177a60 2 [] false false map[] 0xc42110bf00 0xc421f1ed10}
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.624640 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.834514 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/dnsmasq
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.834562 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.836107 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/dnsmasq, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Lengt:[51] Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:33 GMT]] 0xc421fe59c0 51 [] true false map[] 0xc420c91100 <nil>}
Nov 15 02:16:33 af867b kubelet[27751]: I1115 02:16:33.836170 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):dnsmasq" succeede
Nov 15 02:16:34 af867b kubelet[27751]: I1115 02:16:34.504612 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /metrics
Nov 15 02:16:34 af867b kubelet[27751]: I1115 02:16:34.504656 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:34 af867b kubelet[27751]: I1115 02:16:34.509868 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/metrics, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; version=0.0.4] Date:[Wed, 15 Nov 2017 02:16:34 GMT]] 0xc421a21300 -1 [] true true map[] 0xc420c91600 <nil>}
Nov 15 02:16:34 af867b kubelet[27751]: I1115 02:16:34.509915 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):sidecar" succeeded
Nov 15 02:16:34 af867b kubelet[27751]: I1115 02:16:34.523469 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:35 af867b kubelet[27751]: I1115 02:16:35.362813 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:16:35 af867b kubelet[27751]: I1115 02:16:35.362889 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:35 af867b kubelet[27751]: I1115 02:16:35.364510 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:35 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc421a685a0 445 [] true false map[] 0xc420c91b00 <nil>}
Nov 15 02:16:35 af867b kubelet[27751]: I1115 02:16:35.364570 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.219654 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.432958 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481409 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 5609352Ki, capacity: 7393360Ki, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481459 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7399676Ki, capacity: 10198Mi, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481470 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384476, capacity: 10208Ki, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481479 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39632Mi, capacity: 45Gi, time: 2017-11-15 02:16:24.845571562 +0000 UTC
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481488 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6884104Ki, capacity: 7393360Ki
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481496 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.481518 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.523444 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.523506 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.525996 27751 kubelet_pods.go:1284] Generating status for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.526365 27751 status_manager.go:325] Ignoring same status for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:etcd State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:58 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/etcd-amd64:3.0.17 ImageID:docker-pullable://gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940 ContainerID:docker://ab147bb1b65b2001333417bb7654896e6aadb25ce71a8c48c94ae802a2e0197f}] QOSClass:BestEffort}
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.526694 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.827493 27751 volume_manager.go:366] All volumes are attached and mounted for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:16:36 af867b kubelet[27751]: I1115 02:16:36.827767 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:d5076ad0e9fb270d1b8c4ff7cdadbf32db1e30dc42ae24dfbc5cb01bb5aa934 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250)"
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.599607 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.599675 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.601397 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:37 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc4210a26e0 3 [] true false map[] 0xc421179600 <nil>}
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.601454 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.681650 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.681687 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.682968 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:37 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4210a2840 2 [] true false map[] 0xc421179900 <nil>}
Nov 15 02:16:37 af867b kubelet[27751]: I1115 02:16:37.683016 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:16:38 af867b kubelet[27751]: I1115 02:16:38.523481 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.214109 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.214176 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.216105 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:16:39 GMT]] 0xc420db3d00 2 [] true false map[] 0xc420b09400 <nil>}
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.216194 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.524766 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.524874 27751 kubelet_pods.go:1284] Generating status for "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.525057 27751 status_manager.go:325] Ignoring same status for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:59:20 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:59:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:59:20 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-proxy-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-proxy-amd64@sha256:63210bc9690144d41126a646caf03a3d76ddc6d06b8bad119d468193c3e90c24 ContainerID:docker://7ea397ec6048e25ce044b9edba43fe0ef2ed54803c9f3516c7b6e780d2c60a34}] QOSClass:BestEffort}
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.525234 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.541677 27751 configmap.go:187] Setting up volume kube-proxy for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.541959 27751 secret.go:186] Setting up volume kube-proxy-token-gqhfs for pod 9729c03a-c9a8-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.544543 27751 secret.go:217] Received secret kube-system/kube-proxy-token-gqhfs containing (3) pieces of data, 1904 total bytes
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.544605 27751 configmap.go:218] Received configMap kube-system/kube-proxy containing (1) pieces of data, 407 total bytes
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.544712 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy-token-gqhfs: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/kube-proxy-token-gqhfs
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.544943 27751 atomic_writer.go:142] pod kube-system/kube-proxy-nnsjf volume kube-proxy: no update required for target directory /var/lib/kubelet/pods/9729c03a-c9a8-11e7-89f4-c6b053eac242/volumes/kubernetes.io~configmap/kube-proxy
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.825440 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:16:39 af867b kubelet[27751]: I1115 02:16:39.825571 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:91510886f3beb621e5d04309d502352ff78392e46405f21633802da4f7047069 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-proxy-nnsjf_kube-system(9729c03a-c9a8-11e7-89f4-c6b053eac242)"
Nov 15 02:16:40 af867b kubelet[27751]: I1115 02:16:40.523926 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.221358 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.523502 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.523664 27751 kubelet_pods.go:1284] Generating status for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.524038 27751 status_manager.go:325] Ignoring same status for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 01:58:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 01:58:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 01:58:57 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-scheduler-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-scheduler-amd64@sha256:c47b2438bbab28d58e8cbf64b37b7f66d26b000f5c3a1626ee829a4be8fb91e ContainerID:docker://413566b22305750f9a9aa46fbe256c11e75293e80f6e0d4afb7ec9e6afcdee05}] QOSClass:Burstable}
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.524406 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.824722 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:16:41 af867b kubelet[27751]: I1115 02:16:41.824897 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:fe18d337ce42bd3a4d2aacb1349be02824681d8073fbe5d6377946e815fa810 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373)"
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.523478 27751 kubelet.go:1890] SyncLoop (SYNC): 1 pods; kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.523569 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.526100 27751 kubelet_pods.go:1284] Generating status for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.526503 27751 status_manager.go:325] Ignoring same status for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:12:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:12:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:12:51 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP:10.196.65.210 StartTime:2017-11-15 02:12:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2017-11-15 02:12:52 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3 ImageID:docker-pullable://gcr.io/google_containers/kube-controller-manager-amd64@sha256:b6b633e3107761d38fceb200f01bf552c51f65e3524b0aafc1a7710afff07be ContainerID:docker://5b8a6b9d2792044cc30bacf05707077a8f7d5d3b7c1ff35931c96f933cf41f6e}] QOSClass:Burstable}
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.526886 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.827260 27751 volume_manager.go:366] All volumes are attached and mounted for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.827446 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:false CreateSandbox:false SandboxID:a05e8c51434768693d26caf18b1a9774240b6197e992c2a2edcd1a9cb3b597f Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9)"
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.959972 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:16:42 af867b kubelet[27751]: I1115 02:16:42.960021 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.197546 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/kubedns
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.197578 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.199539 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/kubedns, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:43 GMT] Content-Length:[51]] 0xc421438c80 51 [] true false map[] 0xc420ee3100 <nil>}
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.199632 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.461912 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:43 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc421439ca0 18 [] true false map[] 0xc420d4b600 <nil>}
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.461996 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.617390 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.617446 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.625673 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain;charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:16:43 GMT]] 0xc42138a3a0 2 [] false false map[] 0xc420ee3400 0xc4218f0b00}
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.625737 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.834536 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/dnsmasq
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.834576 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.836154 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/dnsmasq, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[51] Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:43 GMT]] 0xc42138a4e0 51 [] true false map[] 0xc420d4bd00 <nil>}
Nov 15 02:16:43 af867b kubelet[27751]: I1115 02:16:43.836204 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):dnsmasq" succeeded
Nov 15 02:16:44 af867b kubelet[27751]: I1115 02:16:44.504751 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /metrics
Nov 15 02:16:44 af867b kubelet[27751]: I1115 02:16:44.504800 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:44 af867b kubelet[27751]: I1115 02:16:44.516940 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/metrics, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain;version=0.0.4] Date:[Wed, 15 Nov 2017 02:16:44 GMT]] 0xc42138bda0 -1 [] true true map[] 0xc420afc000 <nil>}
Nov 15 02:16:44 af867b kubelet[27751]: I1115 02:16:44.516987 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):sidecar" succeeded
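The prober.go / http.go pairs above repeat one pattern: build the probe URL from host, port and path, issue a GET, log the response, and mark the probe succeeded. A rough Go sketch of that loop, under the assumption that, like the kubelet's HTTP prober, any status code from 200 up to but not including 400 counts as success; the target URL and 10-second spacing are illustrative values lifted from this log, not configuration read from anywhere:

```go
// probe_sketch.go - minimal sketch of an HTTP liveness probe loop.
// Success criterion (2xx/3xx) mirrors the kubelet's HTTP prober; the
// URL and interval are illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusBadRequest {
		return nil // probe succeeded
	}
	return fmt.Errorf("probe failed with status %d", resp.StatusCode)
}

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	// Example target modelled on the kubedns liveness probe seen above.
	url := "http://10.32.0.2:10054/healthcheck/kubedns"
	for i := 0; i < 3; i++ {
		if err := probeOnce(client, url); err != nil {
			fmt.Println("Liveness probe failed:", err)
		} else {
			fmt.Println("Liveness probe succeeded for", url)
		}
		time.Sleep(10 * time.Second) // periodSeconds-style spacing
	}
}
```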
Nov 15 02:16:44 af867b kubelet[27751]: I1115 02:16:44.523546 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:45 af867b kubelet[27751]: I1115 02:16:45.362705 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:16:45 af867b kubelet[27751]: I1115 02:16:45.362737 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:45 af867b kubelet[27751]: I1115 02:16:45.364744 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Wed, 15 Nov 2017 02:16:45 GMT] Content-Length:[445]] 0xc42014e320 445 [] true false map[] 0xc420afd300 <nil>}
Nov 15 02:16:45 af867b kubelet[27751]: I1115 02:16:45.364783 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.223433 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.481737 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.523809 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547679 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 5609296Ki, capacity: 7393360Ki, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547712 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7399696Ki, capacity: 10198Mi, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547722 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384476, capacity: 10208Ki, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547730 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39632Mi, capacity: 45Gi, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547738 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6884076Ki, capacity: 7393360Ki
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547746 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:16:46 af867b kubelet[27751]: I1115 02:16:46.547763 27751 eviction_manager.go:325] eviction manager: no resources are starved
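The eviction-manager block above samples each signal (available vs. capacity) and then reports that no resources are starved. A small sketch of that comparison, assuming hard thresholds similar to the kubelet defaults (memory.available < 100Mi, nodefs.available < 10%); the observation values are converted from the log lines, and the threshold functions are assumptions, not this node's actual configuration:

```go
// eviction_sketch.go - illustrative check of eviction signals against
// hard thresholds, mirroring the "observations" / "no resources are
// starved" lines above. Thresholds are assumed kubelet defaults.
package main

import "fmt"

type observation struct {
	signal    string
	available int64 // bytes
	capacity  int64 // bytes
}

func main() {
	// Values converted from the log (Ki -> bytes).
	obs := []observation{
		{"memory.available", 5609296 * 1024, 7393360 * 1024},
		{"nodefs.available", 7399696 * 1024, 10198 * 1024 * 1024},
	}
	thresholds := map[string]func(o observation) bool{
		"memory.available": func(o observation) bool { return o.available < 100*1024*1024 },
		"nodefs.available": func(o observation) bool { return float64(o.available) < 0.10*float64(o.capacity) },
	}
	starved := false
	for _, o := range obs {
		if crossed, ok := thresholds[o.signal]; ok && crossed(o) {
			fmt.Println("eviction manager: threshold crossed for", o.signal)
			starved = true
		}
	}
	if !starved {
		fmt.Println("eviction manager: no resources are starved")
	}
}
```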
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.600763 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.600802 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.624474 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:47 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc421c3afe0 3 [] true false map[] 0xc420a30900 <nil>}
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.624533 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.681909 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.681955 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.685904 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:47 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421c52e00 2 [] true false map[] 0xc420a30f00 <nil>}
Nov 15 02:16:47 af867b kubelet[27751]: I1115 02:16:47.685954 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:16:48 af867b kubelet[27751]: I1115 02:16:48.525895 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:49 af867b kubelet[27751]: I1115 02:16:49.216783 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:16:49 af867b kubelet[27751]: I1115 02:16:49.216815 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:49 af867b kubelet[27751]: I1115 02:16:49.219907 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain;charset=utf-8] Date:[Wed, 15 Nov 2017 02:16:49 GMT] Content-Length:[2]] 0xc4221a18c0 2 [] true false map[] 0xc420b08f00 <nil>}
Nov 15 02:16:49 af867b kubelet[27751]: I1115 02:16:49.219947 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.526372 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.553383 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.750906 27751 kubelet.go:1222] Container garbage collection succeeded
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.831942 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b992788b4c8b07c46b1767efcd8b96000e44bc78da639994b.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.831987 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c807c46b1767efcd8b96000e44bc78da639994b.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832001 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-29c1400e918b28b9982788b4c8b07c46b1767efcd8b9600e44bc78da639994b.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832011 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c8377dff0a611eb436ee74dfce11692aecd1e723311ef80.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832020 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0611eb436ee74dfce11692aecd1e723311ef80.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832029 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-faffc5ce0cf0da39c68377dff0a611eb436ee74dfce1162aecd1e723311ef80.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832120 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832137 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832150 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-debug.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832160 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af581df3bb46e12f0eaf2bf20d3f28b499b6739ec31319af8.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832175 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb4e12f0eaf2bf20d3f28b499b6739ec31319af8.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832190 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-ffde707e35c082af5481df3bb46e12f0eaf2bf20d3f28b99b6739ec31319af8.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832202 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832212 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832225 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832272 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832283 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832291 27751 manager.go:901] ignoring container "/system.slice/boot.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832297 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/proc-xen.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832304 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/proc-xen.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832311 27751 manager.go:901] ignoring container "/system.slice/proc-xen.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832317 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-user-1000.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832323 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-user-1000.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832330 27751 manager.go:901] ignoring container "/system.slice/run-user-1000.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832337 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832343 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832351 27751 manager.go:901] ignoring container "/system.slice/sys-kernel-config.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832357 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a489058dd46b491ef9e14209821b47e7d86b539de402fbef5.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832365 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46491ef9e14209821b47e7d86b539de402fbef5.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832375 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-7d2ebdadb8c541a4839058dd46b491ef9e14209821b47ed86b539de402fbef5.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832383 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef242048619ba4466fad52c408e686c9285a48450f3a6669f.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832391 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619a4466fad52c408e686c9285a48450f3a6669f.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832401 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9d97c593c8a0b1ef2442048619ba4466fad52c408e686c285a48450f3a6669f.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832408 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832430 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832444 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-9729c03a\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volues-kubernetes.io\\x7esecret-kube\\x2dproxy\\x2dtoken\\x2dgqhfs.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832454 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832460 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832468 27751 manager.go:901] ignoring container "/system.slice/dev-mqueue.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832473 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aaea595ec121a222dc749f2f903149a780eb711106fb0c53.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832482 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec11a222dc749f2f903149a780eb711106fb0c53.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832491 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-cd27b64716d8032aa3ea595ec121a222dc749f2f903149780eb711106fb0c53.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832500 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832506 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/u01-applicationSpace.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832514 27751 manager.go:901] ignoring container "/system.slice/u01-applicationSpace.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832519 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb349be028274681d8073fbe5d6377946e815fa810-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832527 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be02874681d8073fbe5d6377946e815fa810-shm.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832538 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-fe18d337ce42bd3a4d2aacb1349be028274681d8073fbe5d637746e815fa810-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832545 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0a5fcd260872a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832554 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd26082a7b6556aa9cd8b38bc1a1cf635972c1d6511.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832564 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-984a35c7cf6d04c0af5fcd260872a7b6556aa9cd8b38bca1cf635972c1d6511.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832572 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/-.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832578 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832585 27751 manager.go:901] ignoring container "/system.slice/-.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832591 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832598 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832606 27751 manager.go:901] ignoring container "/system.slice/dev-hugepages.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832612 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d0430d502352ff78392e46405f21633802da4f7047069-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832620 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352f78392e46405f21633802da4f7047069-shm.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832630 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-91510886f3beb621e5d04309d502352ff78392e46405f2163380da4f7047069-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832638 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d316fd7642fe4fee6bf193755575355be3e87a13d-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832646 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd764fe4fee6bf193755575355be3e87a13d-shm.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832655 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-19b06ca14052c17f92bf03d5316fd7642fe4fee6bf19375557535be3e87a13d-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832663 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb43aa6c5b126ad2674a7b8041b4873d30b98faefad6f5b44.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832671 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b16ad2674a7b8041b4873d30b98faefad6f5b44.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832680 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-23d8892c53722cfb4d3aa6c5b126ad2674a7b8041b487330b98faefad6f5b44.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832689 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0be3519663bc7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832697 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663c7e5cbaff7a4fea1d8b7490fa3cc5a74bac60.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832706 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-82408ec054eef4c0bce3519663bc7e5cbaff7a4fea1d8b490fa3cc5a74bac60.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832752 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832762 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\2dc6b053eac242-volumes-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832774 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-b77b0858\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volues-kubernetes.io\\x7esecret-weave\\x2dnet\\x2dtoken\\x2drn6j7.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832784 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832791 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832800 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-default.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832806 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e9e1b7cf587c4218e321d811314e58f2cf312f1c0f-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832814 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf58c4218e321d811314e58f2cf312f1c0f-shm.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832823 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-be14b0552100f929052c8e94e1b7cf587c4218e321d811314e582cf312f1c0f-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832831 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ffcdadbf32d6b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832839 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf326b1e30dc42ae24dfbc5cb01bb5aa934-shm.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832848 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-d5076ad0e9fb270d1b8c4ff7cdadbf32d6b1e30dc42ae24dfbc5b01bb5aa934-shm.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832856 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e01a816a7446b004869c57dbf880db7daeb3edac0f81e51.mount"
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832864 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a746b004869c57dbf880db7daeb3edac0f81e51.mount", but ignoring.
Nov 15 02:16:50 af867b kubelet[27751]: I1115 02:16:50.832874 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-9ba72245089e9b36e901a816a7446b004869c57dbf880d7daeb3edac0f81e51.mount"
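The long run of factory.go/manager.go lines above comes from cAdvisor asking each container "factory" in turn whether it can handle a cgroup path: docker declines the systemd mount units, systemd accepts them but ignores them, and raw picks up anything left over (as seen later for the new pod's cgroup). A simplified sketch of that selection order; the factory names and predicates are stand-ins, not cAdvisor's real interfaces:

```go
// factory_sketch.go - sketch of the factory selection that produces the
// repeated "was unable to handle" / "can handle ... but ignoring" lines:
// each factory is asked in turn, and a factory may recognize a path
// while still declining to watch it.
package main

import (
	"fmt"
	"strings"
)

type factory struct {
	name      string
	canHandle func(path string) bool
	canAccept func(path string) bool // false => "but ignoring"
}

func pickFactory(factories []factory, path string) {
	for _, f := range factories {
		if !f.canHandle(path) {
			fmt.Printf("Factory %q was unable to handle container %q\n", f.name, path)
			continue
		}
		if !f.canAccept(path) {
			fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.name, path)
			fmt.Printf("ignoring container %q\n", path)
			return
		}
		fmt.Printf("Using factory %q for container %q\n", f.name, path)
		return
	}
}

func main() {
	factories := []factory{
		{"docker", func(p string) bool { return strings.HasPrefix(p, "/docker/") }, func(string) bool { return true }},
		{"systemd", func(p string) bool { return strings.HasSuffix(p, ".mount") }, func(string) bool { return false }},
		{"raw", func(string) bool { return true }, func(string) bool { return true }},
	}
	pickFactory(factories, "/system.slice/dev-hugepages.mount")
	pickFactory(factories, "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242")
}
```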
Nov 15 02:16:51 af867b kubelet[27751]: I1115 02:16:51.227958 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:52 af867b kubelet[27751]: I1115 02:16:52.523852 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:52 af867b kubelet[27751]: I1115 02:16:52.960894 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:16:52 af867b kubelet[27751]: I1115 02:16:52.960939 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.197839 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/kubedns
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.197867 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.199321 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/kubedns, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:53 GMT] Content-Length:[51]] 0xc421144720 51 [] true false map[] 0xc4200dd600 <nil>}
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.199378 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.467882 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:53 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc42121c520 18 [] true false map[] 0xc4200dd000 <nil>}
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.467949 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.618829 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.618869 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.626801 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Date:[Wed, 15 Nov 2017 02:16:53 GMT] Content-Type:[text/plain; charset=utf-8] Content-Length:[2]] 0xc421263880 2 [] false false map[] 0xc4200ddc00 0xc421f50580}
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.626841 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.836175 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/dnsmasq
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.836238 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.837693 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/dnsmasq, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:16:53 GMT] Content-Length:[51]] 0xc4212e11c0 51 [] true false map[] 0xc420430c00 <nil>}
Nov 15 02:16:53 af867b kubelet[27751]: I1115 02:16:53.837759 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):dnsmasq" succeeded
Nov 15 02:16:54 af867b kubelet[27751]: I1115 02:16:54.504578 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /metrics
Nov 15 02:16:54 af867b kubelet[27751]: I1115 02:16:54.504617 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:54 af867b kubelet[27751]: I1115 02:16:54.511205 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/metrics, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain;version=0.0.4] Date:[Wed, 15 Nov 2017 02:16:54 GMT]] 0xc420f3e180 -1 [] true true map[] 0xc420431a00 <nil>}
Nov 15 02:16:54 af867b kubelet[27751]: I1115 02:16:54.511247 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):sidecar" succeeded
Nov 15 02:16:54 af867b kubelet[27751]: I1115 02:16:54.526795 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.363904 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.363945 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.368050 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:55 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc4218b97e0 445 [] true false map[] 0xc421178700 <nil>}
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.368111 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.879324 27751 config.go:282] Setting pods for source api
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.880480 27751 config.go:404] Receiving a new pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.881055 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.881195 27751 kubelet_pods.go:1284] Generating status for "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.882469 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.907077 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.907103 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.907141 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.907149 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.908143 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.908331 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:16:55.903194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.908368 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.920067 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.968016 27751 config.go:282] Setting pods for source api
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.969524 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:55 af867b kubelet[27751]: I1115 02:16:55.973949 27751 status_manager.go:451] Status for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:55 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:16:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.076998 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0d5631da-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-10" (UID: "0d5631da-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.178310 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0d5631da-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-10" (UID: "0d5631da-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.178387 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 0d5631da-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/0d5631da-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.178601 27751 empty_dir.go:264] pod 0d5631da-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.178623 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/0d5631da-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/0d5631da-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
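The mount_linux.go:135 line shows the kubelet wrapping the tmpfs mount for a secret volume in a systemd-run --scope transient unit (which is also why a short-lived run-1306.scope shows up just below). A sketch of issuing that same command shape from Go, assuming root and systemd are available; the target path is a placeholder, not a real pod directory:

```go
// mount_sketch.go - sketch of the systemd-run transient mount the kubelet
// issues for a secret volume. Running this for real requires root and a
// systemd host; the target path below is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func mountTmpfsViaSystemdRun(target string) error {
	args := []string{
		"--description=Kubernetes transient mount for " + target,
		"--scope",
		"--", "mount", "-t", "tmpfs", "tmpfs", target,
	}
	out, err := exec.Command("systemd-run", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemd-run mount failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical secret volume path, matching the layout seen in the log.
	target := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~secret/default-token-qjbsf"
	if err := mountTmpfsViaSystemdRun(target); err != nil {
		fmt.Println(err)
	}
}
```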
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.191695 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-1306.scope"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.191764 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-1306.scope: /system.slice/run-1306.scope not handled by systemd handler
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.191773 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-1306.scope"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.191781 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-1306.scope"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.191934 27751 manager.go:932] Added container: "/system.slice/run-1306.scope" (aliases: [], namespace: "")
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192037 27751 handler.go:325] Added event &{/system.slice/run-1306.scope 2017-11-15 02:16:56.185194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192070 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-8dc12d8756ef2ac44557ac1346841f65a298d1c068a94bca68af19724fac1b0.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192083 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-8dc12d8756ef2ac445557ac134841f65a298d1c068a94bca68af19724fac1b0.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192094 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-8dc12d8756ef2ac445557ac1346841f65a298d1c068a94ca68af19724fac1b0.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192104 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-4c898f767718d5bed6f7aa6de5dd63e3b65aa41665cb76b96b0e4e8bd126526.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192112 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-4c898f767718d5bed36f7aa6dedd63e3b65aa41665cb76b96b0e4e8bd126526.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192122 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-4c898f767718d5bed36f7aa6de5dd63e3b65aa41665cb7b96b0e4e8bd126526.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192131 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-bad218fed21ce57f7150c24da0486c9ae20ab13e29dc9dfbd021de166c11a8b.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192139 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-bad218fed21ce57f77150c24da486c9ae20ab13e29dc9dfbd021de166c11a8b.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192149 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-bad218fed21ce57f77150c24da0486c9ae20ab13e29dc9fbd021de166c11a8b.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192157 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-97270c63\\x2dc9a8\\x2d11e7\\2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2ddns\\x2dtoken\\x2d987zv.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192167 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-97270c63\\x2dc9a8\\x2d11e7\\x2d89f4\\2dc6b053eac242-volumes-kubernetes.io\\x7esecret-kube\\x2ddns\\x2dtoken\\x2d987zv.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192179 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-97270c63\\x2dc9a8\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volues-kubernetes.io\\x7esecret-kube\\x2ddns\\x2dtoken\\x2d987zv.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192193 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-57f84a53f69dda718423e2a2b069d129afa4d226103816e6fa21ee5fab8b82b-shm.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192202 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-57f84a53f69dda718423e2a72b069d12afa4d226103816e6fa21ee5fab8b82b-shm.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192211 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-57f84a53f69dda718423e2a72b069d129afa4d226103816e6fa2ee5fab8b82b-shm.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192220 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-43c233c30408e57d9b41952d031e6274ca80b99953da44b0ab20a811a1643cd.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192228 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-43c233c30408e57d98b41952d01e6274ca80b99953da44b0ab20a811a1643cd.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192241 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-43c233c30408e57d98b41952d031e6274ca80b99953da4b0ab20a811a1643cd.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192250 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-e4d771176f4a77939775443f08578068442b107ec0f99073f00532cfd4c7e31.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192258 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-e4d771176f4a779391775443f0578068442b107ec0f99073f00532cfd4c7e31.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192268 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-e4d771176f4a779391775443f08578068442b107ec0f9973f00532cfd4c7e31.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192277 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-26165e268da2.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192284 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-26165e268da2.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192292 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-26165e268da2.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192299 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-a05e8c51434768693d26caf8b1a97742f40b6197e992c2a2edcd1a9cb3b597f-shm.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192307 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-a05e8c51434768693d26caf18b1a9774f40b6197e992c2a2edcd1a9cb3b597f-shm.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192316 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-a05e8c51434768693d26caf18b1a97742f40b6197e992c2a2edc1a9cb3b597f-shm.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192325 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-de36c5b16bb7b3560a83dd6d52bbd656881ea6228a27d7578e51bcb2fa16f3f.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192334 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-de36c5b16bb7b35606a83dd6d5bbd656881ea6228a27d7578e51bcb2fa16f3f.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192343 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-de36c5b16bb7b35606a83dd6d52bbd656881ea6228a27d578e51bcb2fa16f3f.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.192366 27751 container.go:409] Start housekeeping for container "/system.slice/run-1306.scope"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.196728 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-3b26b88e613ebc9ff1ea92e255fe70cbbd3e8e67980d4be55241a1198e378cc.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.196748 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-3b26b88e613ebc9ff51ea92e25fe70cbbd3e8e67980d4be55241a1198e378cc.mount", but ignoring.
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.196777 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-3b26b88e613ebc9ff51ea92e255fe70cbbd3e8e67980d4e55241a1198e378cc.mount"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.206221 27751 manager.go:989] Destroyed container: "/system.slice/run-1306.scope" (aliases: [], namespace: "")
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.206241 27751 handler.go:325] Added event &{/system.slice/run-1306.scope 2017-11-15 02:16:56.206236648 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.227835 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.227918 27751 atomic_writer.go:145] pod default/snginx-10 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/0d5631da-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.228023 27751 atomic_writer.go:160] pod default/snginx-10 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/0d5631da-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_16_56.160831717
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.228115 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0d5631da-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-10" (UID: "0d5631da-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.228350 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-10", UID:"0d5631da-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
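The atomic_writer.go lines above describe how the secret payload lands on disk: files are written into a fresh timestamped ".." directory and a data symlink is switched over, so a consumer never observes a half-written token. A sketch of that pattern, assuming the conventional ..data symlink layout; the directory naming and payload here are illustrative, not the kubelet's exact behavior:

```go
// atomicwriter_sketch.go - sketch of the timestamped-directory plus
// symlink-swap pattern behind the atomic_writer.go log lines. Paths and
// payload are placeholders.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func atomicWrite(volumeDir string, payload map[string][]byte) error {
	// New timestamped data directory alongside the published files.
	tsDir := filepath.Join(volumeDir, ".."+time.Now().Format("2006_01_02_15_04_05.000000000"))
	if err := os.MkdirAll(tsDir, 0755); err != nil {
		return err
	}
	for name, data := range payload {
		if err := os.WriteFile(filepath.Join(tsDir, name), data, 0644); err != nil {
			return err
		}
	}
	// Swap the ..data symlink in two steps so the switch itself is atomic.
	tmpLink := filepath.Join(volumeDir, "..data_tmp")
	dataLink := filepath.Join(volumeDir, "..data")
	if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
		return err
	}
	return os.Rename(tmpLink, dataLink)
}

func main() {
	dir, _ := os.MkdirTemp("", "token-volume")
	err := atomicWrite(dir, map[string][]byte{
		"token":  []byte("placeholder-token"),
		"ca.crt": []byte("placeholder-ca"),
	})
	fmt.Println("atomic write into", dir, "err:", err)
}
```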
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.231847 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520428 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520475 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520488 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520550 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520559 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.520579 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
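For a brand-new pod like snginx-10, computePodActions reports KillPod:true CreateSandbox:true with every container listed in ContainersToStart, whereas for the healthy static pods earlier it reported nothing to do. A reduced sketch of that decision, keeping the field names from the log but none of the real dependencies; this is an illustration of the branching, not the kubelet's implementation:

```go
// podactions_sketch.go - simplified version of the decision logged by
// computePodActions: with no existing sandbox, create one and start
// every container; otherwise leave a healthy running pod alone.
package main

import "fmt"

type podActions struct {
	KillPod           bool
	CreateSandbox     bool
	SandboxID         string
	Attempt           int
	ContainersToStart []int
}

func computePodActions(sandboxID string, containerCount int) podActions {
	if sandboxID == "" {
		// New pod: no sandbox found, start everything (the snginx-10 case above).
		start := make([]int, 0, containerCount)
		for i := 0; i < containerCount; i++ {
			start = append(start, i)
		}
		return podActions{KillPod: true, CreateSandbox: true, ContainersToStart: start}
	}
	// Running pod with a live sandbox: nothing to kill, nothing to start.
	return podActions{SandboxID: sandboxID}
}

func main() {
	fmt.Printf("%+v\n", computePodActions("", 1))             // new pod
	fmt.Printf("%+v\n", computePodActions("fe18d337ce42", 1)) // existing pod
}
```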
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.523431 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.526707 27751 expiration_cache.go:98] Entry version: {key:version obj:0xc4223dcb40} has expired
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.527153 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.527169 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242"
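The docker_service.go:407 lines place the new pod under a per-pod cgroup whose parent encodes its QoS class: /kubepods/besteffort/pod<uid> here, a burstable path for the control-plane pods earlier. A small sketch of deriving such a parent path, assuming the usual kubelet layout in which Guaranteed pods sit directly under /kubepods; the helper is illustrative, not the kubelet's own function:

```go
// cgroup_sketch.go - illustrative derivation of the cgroup parent seen
// in the "Setting cgroup parent to:" lines: /kubepods/<qos>/pod<uid>,
// with the qos segment omitted for Guaranteed pods (assumption based on
// the standard kubelet cgroup hierarchy).
package main

import "fmt"

func podCgroupParent(qosClass, podUID string) string {
	if qosClass == "Guaranteed" {
		return fmt.Sprintf("/kubepods/pod%s", podUID)
	}
	return fmt.Sprintf("/kubepods/%s/pod%s", qosClass, podUID)
}

func main() {
	fmt.Println(podCgroupParent("besteffort", "0d5631da-c9ab-11e7-89f4-c6b053eac242"))
	fmt.Println(podCgroupParent("burstable", "bc22704d9f4dc5d62a8217cfd5c14373"))
}
```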
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.551794 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603355 27751 helpers.go:871] eviction manager: observations: signal=nodefs.inodesFree, available: 10384476, capacity: 10208Ki, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603390 27751 helpers.go:871] eviction manager: observations: signal=imagefs.available, available: 39632Mi, capacity: 45Gi, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603399 27751 helpers.go:873] eviction manager: observations: signal=allocatableMemory.available, available: 6884476Ki, capacity: 7393360Ki
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603406 27751 helpers.go:873] eviction manager: observations: signal=allocatableNodeFs.available, available: 9624040228, capacity: 10198Mi
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603413 27751 helpers.go:871] eviction manager: observations: signal=memory.available, available: 5609296Ki, capacity: 7393360Ki, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603421 27751 helpers.go:871] eviction manager: observations: signal=nodefs.available, available: 7399696Ki, capacity: 10198Mi, time: 2017-11-15 02:16:41.962872214 +0000 UTC
Nov 15 02:16:56 af867b kubelet[27751]: I1115 02:16:56.603440 27751 eviction_manager.go:325] eviction manager: no resources are starved
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.191300 27751 config.go:282] Setting pods for source api
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.192475 27751 config.go:404] Receiving a new pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.193059 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.193220 27751 kubelet_pods.go:1284] Generating status for "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.193519 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.194751 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e141d4f-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-25" (UID: "0e141d4f-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.213955 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.296945 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e141d4f-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-25" (UID: "0e141d4f-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.297016 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 0e141d4f-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/0e141d4f-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.297197 27751 empty_dir.go:264] pod 0e141d4f-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.297222 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/0e141d4f-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/0e141d4f-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.465063 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.465176 27751 atomic_writer.go:145] pod default/snginx-25 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/0e141d4f-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.465287 27751 atomic_writer.go:160] pod default/snginx-25 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/0e141d4f-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_16_57.864151808
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.465396 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e141d4f-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-25" (UID: "0e141d4f-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.465436 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-25", UID:"0e141d4f-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1314", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514195 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
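[editor's note] The atomic_writer messages above reflect a publish-into-timestamped-directory pattern: the secret payload is written into a fresh timestamp-named directory and then exposed in one step, so readers never observe a half-written volume. A rough, self-contained sketch of that pattern follows; the directory names and the symlink-swap detail are assumptions here, not read from this log, and this is not the kubelet's actual atomic_writer implementation.

    package main

    import (
            "os"
            "path/filepath"
            "time"
    )

    // publishAtomically writes payload into a new timestamped directory and then
    // atomically repoints a "..data" symlink at it, so consumers see either the
    // old or the new content, never a mix.
    func publishAtomically(volumeDir string, payload map[string][]byte) error {
            tsDir := filepath.Join(volumeDir, time.Now().Format("..2006_01_02_15_04_05.000000000"))
            if err := os.MkdirAll(tsDir, 0700); err != nil {
                    return err
            }
            for name, data := range payload {
                    if err := os.WriteFile(filepath.Join(tsDir, name), data, 0600); err != nil {
                            return err
                    }
            }
            tmpLink := filepath.Join(volumeDir, "..data_tmp")
            _ = os.Remove(tmpLink)
            if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
                    return err
            }
            // rename(2) replaces the previous "..data" symlink in a single step.
            return os.Rename(tmpLink, filepath.Join(volumeDir, "..data"))
    }

    func main() {
            _ = publishAtomically("/tmp/example-volume", map[string][]byte{"token": []byte("demo")})
    }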
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514238 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514252 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514327 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514339 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.514362 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.516376 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.516392 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.600967 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.601007 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.606469 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:57 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc420cc3280 3 [] true false map[] 0xc420afd300 <nil>}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.606518 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
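[editor's note] The two probe lines above show the shape of the kubelet's HTTP readiness check: an HTTP GET against the pod IP and port, with any response status in the 2xx/3xx range counted as success. A minimal stand-alone sketch of such a check follows; the timeout is illustrative and this is not the kubelet's prober code.

    package main

    import (
            "fmt"
            "net/http"
            "time"
    )

    // probeHTTP performs a GET and treats any status in [200, 400) as success,
    // which matches how the probe above reports "Probe succeeded" for a 200 OK.
    func probeHTTP(url string) (bool, error) {
            client := &http.Client{Timeout: 1 * time.Second}
            resp, err := client.Get(url)
            if err != nil {
                    return false, err
            }
            defer resp.Body.Close()
            return resp.StatusCode >= 200 && resp.StatusCode < 400, nil
    }

    func main() {
            // Address taken from the log line above.
            ok, err := probeHTTP("http://10.32.0.2:8081/readiness")
            fmt.Println(ok, err)
    }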
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.681846 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.681906 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.682872 27751 http.go:96] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420d82000 2 [] true false map[] 0xc420afdf00 <nil>}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.682926 27751 prober.go:113] Liveness probe for "kube-scheduler-af867b_kube-system(bc22704d9f4dc5d62a8217cfd5c14373):kube-scheduler" succeeded
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.710661 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678/resolv.conf with:
Nov 15 02:16:57 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
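[editor's note] The resolv.conf rewrite above replaces the sandbox's DNS configuration with the cluster DNS server, the pod's search domains, and ndots:5. A small sketch of producing that file content follows; the nameserver, search domains, and options are the ones shown in the log, the output path is a placeholder, and this is not the dockershim code itself.

    package main

    import (
            "fmt"
            "os"
            "strings"
    )

    // writeResolvConf renders a resolv.conf like the one logged above and writes
    // it over the given path.
    func writeResolvConf(path string, nameservers, searches, options []string) error {
            var b strings.Builder
            for _, ns := range nameservers {
                    fmt.Fprintf(&b, "nameserver %s\n", ns)
            }
            if len(searches) > 0 {
                    fmt.Fprintf(&b, "search %s\n", strings.Join(searches, " "))
            }
            if len(options) > 0 {
                    fmt.Fprintf(&b, "options %s\n", strings.Join(options, " "))
            }
            return os.WriteFile(path, []byte(b.String()), 0644)
    }

    func main() {
            err := writeResolvConf("/tmp/resolv.conf",
                    []string{"10.96.0.10"},
                    []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local", "opcwlaas.oraclecloud.internal."},
                    []string{"ndots:5"})
            fmt.Println(err)
    }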
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.710877 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-10_default"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.716034 27751 cni.go:326] Got netns path /proc/1421/ns/net
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.716048 27751 cni.go:327] Using netns path default
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.716149 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.728525 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732585 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678" (aliases: [k8s_POD_snginx-10_default_0d5631da-c9ab-11e7-89f4-c6b053eac242_0 5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678], namespace: "docker")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732751 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678 2017-11-15 02:16:57.189194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732798 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732810 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732816 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732824 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.732994 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733110 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:16:57.203194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733131 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-1482.scope"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733140 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-1482.scope: /system.slice/run-1482.scope not handled by systemd handler
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733145 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-1482.scope"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733151 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-1482.scope"
Nov 15 02:16:57 af867b kubelet[27751]: W1115 02:16:57.733219 27751 container.go:354] Failed to create summary reader for "/system.slice/run-1482.scope": none of the resources are being tracked.
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733230 27751 manager.go:932] Added container: "/system.slice/run-1482.scope" (aliases: [], namespace: "")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733260 27751 handler.go:325] Added event &{/system.slice/run-1482.scope 0001-01-01 00:00:00 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733276 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-92b5e8c6024d489118cd69220d7bd1096230880041c5fccc7046f3e126499b74.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733286 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-92b5e8c6024d489118cd69220d7bd1096230880041c5fccc7046f3e126499b74.mount", but ignoring.
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733297 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-92b5e8c6024d489118cd69220d7bd1096230880041c5fccc7046f3e126499b74.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733306 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-0d5631da\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733316 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-0d5631da\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733327 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-0d5631da\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733342 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678-shm.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733351 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678-shm.mount", but ignoring.
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733361 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678-shm.mount"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733373 27751 manager.go:989] Destroyed container: "/system.slice/run-1482.scope" (aliases: [], namespace: "")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733381 27751 handler.go:325] Added event &{/system.slice/run-1482.scope 2017-11-15 02:16:57.733377639 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733402 27751 container.go:409] Start housekeeping for container "/system.slice/run-1482.scope"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.733416 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0d5631da-c9ab-11e7-89f4-c6b053eac242/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.752447 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.764449 27751 generic.go:146] GenericPLEG: 0d5631da-c9ab-11e7-89f4-c6b053eac242/5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678: non-existent -> running
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.765929 27751 cni.go:326] Got netns path /proc/1421/ns/net
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.765940 27751 cni.go:327] Using netns path default
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.766042 27751 cni.go:298] About to add CNI network weave (type=weave-net)
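[editor's note] "About to add CNI network weave" corresponds to an ADD operation in the CNI protocol: roughly speaking, the plugin binary is executed with CNI_COMMAND=ADD and the container's network namespace in its environment, with the network configuration passed on stdin and the result returned as JSON on stdout. A rough sketch of that contract follows; the plugin path and configuration are made-up examples, the netns path is the one logged above, and this bypasses the libcni helper the kubelet actually uses.

    package main

    import (
            "bytes"
            "fmt"
            "os/exec"
    )

    // cniAdd invokes a CNI plugin binary by hand: configuration goes in on stdin,
    // the environment names the operation, container, netns and interface, and
    // the result comes back as JSON on stdout.
    func cniAdd(pluginPath, containerID, netnsPath, ifName string, netConf []byte) ([]byte, error) {
            cmd := exec.Command(pluginPath)
            cmd.Stdin = bytes.NewReader(netConf)
            cmd.Env = []string{
                    "CNI_COMMAND=ADD",
                    "CNI_CONTAINERID=" + containerID,
                    "CNI_NETNS=" + netnsPath,
                    "CNI_IFNAME=" + ifName,
                    "CNI_PATH=/opt/cni/bin",
            }
            return cmd.Output()
    }

    func main() {
            conf := []byte(`{"cniVersion":"0.3.1","name":"example","type":"bridge"}`)
            out, err := cniAdd("/opt/cni/bin/bridge", "example-container", "/proc/1421/ns/net", "eth0", conf)
            fmt.Println(string(out), err)
    }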
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.797861 27751 config.go:282] Setting pods for source api
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.798081 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678"] for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.798198 27751 status_manager.go:451] Status for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:16:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.804291 27751 generic.go:345] PLEG: Write status for snginx-10/default: &container.PodStatus{ID:"0d5631da-c9ab-11e7-89f4-c6b053eac242", Name:"snginx-10", Namespace:"default", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421e424b0)}} (err: <nil>)
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.804346 27751 kubelet.go:1871] SyncLoop (PLEG): "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"0d5631da-c9ab-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678"}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.807456 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.914004 27751 config.go:282] Setting pods for source api
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.915783 27751 config.go:404] Receiving a new pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.916308 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.916443 27751 kubelet_pods.go:1284] Generating status for "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.917493 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.923054 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.926198 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.926212 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.926220 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.927546 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.933104 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:16:57.920194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.933172 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.943182 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.946662 27751 status_manager.go:451] Status for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:57 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:16:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.946829 27751 config.go:282] Setting pods for source api
Nov 15 02:16:57 af867b kubelet[27751]: I1115 02:16:57.948518 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.096722 27751 kuberuntime_manager.go:640] Created PodSandbox "5fb7fcf7c72f470128f788803221316e45e7426df36260371e72e5a65e412678" for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.116791 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e4740f5-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-16" (UID: "0e4740f5-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.176584 27751 config.go:282] Setting pods for source api
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.177575 27751 config.go:404] Receiving a new pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.179040 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.179193 27751 kubelet_pods.go:1284] Generating status for "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.179473 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185043 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185106 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185121 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185128 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185137 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185286 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185390 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:16:58.183194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.185427 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.210909 27751 status_manager.go:451] Status for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:16:58 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:16:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.211040 27751 config.go:282] Setting pods for source api
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.213793 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.217167 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e4740f5-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-16" (UID: "0e4740f5-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.217206 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0eb57d14-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-28" (UID: "0eb57d14-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.217281 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 0e4740f5-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/0e4740f5-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.217455 27751 empty_dir.go:264] pod 0e4740f5-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.217473 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/0e4740f5-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/0e4740f5-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229494 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-1596.scope"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229524 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-1596.scope: /system.slice/run-1596.scope not handled by systemd handler
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229534 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-1596.scope"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229542 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-1596.scope"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229715 27751 manager.go:932] Added container: "/system.slice/run-1596.scope" (aliases: [], namespace: "")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229834 27751 handler.go:325] Added event &{/system.slice/run-1596.scope 2017-11-15 02:16:58.227194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229860 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-8930558db511.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229867 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-8930558db511.mount", but ignoring.
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229877 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-8930558db511.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229884 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-0e141d4f\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229897 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-0e141d4f\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229908 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-0e141d4f\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.229932 27751 container.go:409] Start housekeeping for container "/system.slice/run-1596.scope"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.238462 27751 manager.go:989] Destroyed container: "/system.slice/run-1596.scope" (aliases: [], namespace: "")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.238486 27751 handler.go:325] Added event &{/system.slice/run-1596.scope 2017-11-15 02:16:58.238479785 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.318103 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0eb57d14-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-28" (UID: "0eb57d14-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.318178 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 0eb57d14-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/0eb57d14-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.318355 27751 empty_dir.go:264] pod 0eb57d14-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.318380 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/0eb57d14-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/0eb57d14-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:16:58 af867b kubelet[27751]: W1115 02:16:58.357080 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-1603.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-1603.scope: no such file or directory
Nov 15 02:16:58 af867b kubelet[27751]: W1115 02:16:58.357833 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-1603.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-1603.scope: no such file or directory
Nov 15 02:16:58 af867b kubelet[27751]: W1115 02:16:58.358627 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-1603.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-1603.scope: no such file or directory
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.359187 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-0e4740f5\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.359937 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-0e4740f5\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.359951 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-0e4740f5\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.366666 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.366768 27751 atomic_writer.go:145] pod default/snginx-28 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/0eb57d14-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.366863 27751 atomic_writer.go:160] pod default/snginx-28 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/0eb57d14-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_16_58.948849759
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.366955 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0eb57d14-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-28" (UID: "0eb57d14-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.367216 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-28", UID:"0eb57d14-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.378118 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.3" for pod "snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.378227 27751 kuberuntime_manager.go:705] Creating container &Container{Name:nginx,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[{ 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{default-token-qjbsf true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod snginx-10_default(0d5631da-c9ab-11e7-89f4-c6b053eac242)
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380710 27751 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380760 27751 config.go:131] looking for config.json at /var/lib/kubelet/config.json
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380802 27751 config.go:131] looking for config.json at /config.json
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380817 27751 config.go:131] looking for config.json at /.docker/config.json
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380825 27751 config.go:131] looking for config.json at /.docker/config.json
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380838 27751 config.go:101] looking for .dockercfg at /var/lib/kubelet/.dockercfg
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380849 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380860 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380867 27751 config.go:101] looking for .dockercfg at /.dockercfg
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380876 27751 provider.go:89] Unable to parse Docker config file: couldn't find valid .dockercfg after checking in [/var/lib/kubelet /]
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380888 27751 kuberuntime_image.go:46] Pulling image "docker.io/library/nginx:latest" without credentials
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.380942 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-10", UID:"0d5631da-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1308", FieldPath:"spec.containers{nginx}"}): type: 'Normal' reason: 'Pulling' pulling image "nginx"
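[editor's note] The config.go lines above are the kubelet's registry-credential search: it looks for a Docker config.json and then a legacy .dockercfg in a fixed set of directories, and when nothing parses it pulls the image anonymously ("without credentials"). A simplified sketch of that search order follows; the directory list is the one implied by the log, and the parsing step is reduced to a file-existence check, so this is not the real credentialprovider code.

    package main

    import (
            "fmt"
            "os"
            "path/filepath"
    )

    // findDockerConfig walks candidate directories looking first for config.json,
    // then for the older .dockercfg, returning the first file that exists.
    // Returning "" means the pull proceeds without credentials, as logged above.
    func findDockerConfig(searchDirs []string) string {
            for _, name := range []string{"config.json", ".dockercfg"} {
                    for _, dir := range searchDirs {
                            candidate := filepath.Join(dir, name)
                            if _, err := os.Stat(candidate); err == nil {
                                    return candidate
                            }
                    }
            }
            return ""
    }

    func main() {
            // Directories taken from the log above ("/var/lib/kubelet" and "/").
            if cfg := findDockerConfig([]string{"/var/lib/kubelet", "/"}); cfg == "" {
                    fmt.Println("no Docker credentials found; pulling anonymously")
            } else {
                    fmt.Println("using credentials from", cfg)
            }
    }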
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.485875 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.485927 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.485942 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.486006 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.486015 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.486035 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.490113 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.490131 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.524080 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.639816 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.639901 27751 atomic_writer.go:145] pod default/snginx-16 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/0e4740f5-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.639982 27751 atomic_writer.go:160] pod default/snginx-16 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/0e4740f5-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_16_58.130212914
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.640068 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/0e4740f5-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-16" (UID: "0e4740f5-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.640308 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-16", UID:"0e4740f5-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847645 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847699 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847713 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847772 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847780 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.847801 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.852428 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:58 af867b kubelet[27751]: I1115 02:16:58.852447 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.214620 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.214670 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.215970 27751 http.go:96] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:16:59 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421bf2da0 2 [] true false map[] 0xc421e41500 <nil>}
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.216017 27751 prober.go:113] Liveness probe for "kube-controller-manager-af867b_kube-system(f49ee4da5c66af63a0b4bcea4f69baf9):kube-controller-manager" succeeded
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.234459 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576/resolv.conf with:
Nov 15 02:16:59 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.234625 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-25_default"
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.238777 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576"
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.419575 27751 cni.go:326] Got netns path /proc/1642/ns/net
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.419591 27751 cni.go:327] Using netns path default
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.419731 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.436579 27751 generic.go:146] GenericPLEG: 0e141d4f-c9ab-11e7-89f4-c6b053eac242/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576: non-existent -> running
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.442961 27751 cni.go:326] Got netns path /proc/1642/ns/net
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.442975 27751 cni.go:327] Using netns path default
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.443092 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.555930 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576" (aliases: [k8s_POD_snginx-25_default_0e141d4f-c9ab-11e7-89f4-c6b053eac242_0 b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576], namespace: "docker")
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.556083 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576 2017-11-15 02:16:58.497194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.556131 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0e141d4f-c9ab-11e7-89f4-c6b053eac242/b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576"
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.676375 27751 kuberuntime_manager.go:640] Created PodSandbox "b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576" for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.917614 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.4" for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.917769 27751 kuberuntime_manager.go:705] Creating container &Container{Name:nginx,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[{ 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{default-token-qjbsf true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)
Nov 15 02:16:59 af867b kubelet[27751]: I1115 02:16:59.919195 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-25", UID:"0e141d4f-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1314", FieldPath:"spec.containers{nginx}"}): type: 'Normal' reason: 'Pulling' pulling image "nginx"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.449784 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.451830 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576"] for pod "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.452469 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc/resolv.conf with:
Nov 15 02:17:00 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.452618 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-28_default"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.454516 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc" (aliases: [k8s_POD_snginx-28_default_0eb57d14-c9ab-11e7-89f4-c6b053eac242_0 0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc], namespace: "docker")
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.454679 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc 2017-11-15 02:16:59.698194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.455024 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0eb57d14-c9ab-11e7-89f4-c6b053eac242/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.465914 27751 cni.go:326] Got netns path /proc/1844/ns/net
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.465946 27751 cni.go:327] Using netns path default
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.466289 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.530175 27751 cni.go:326] Got netns path /proc/1844/ns/net
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.530196 27751 cni.go:327] Using netns path default
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.532189 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.823489 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.825773 27751 manager.go:932] Added container: "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07" (aliases: [k8s_POD_snginx-16_default_0e4740f5-c9ab-11e7-89f4-c6b053eac242_0 66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07], namespace: "docker")
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.825912 27751 handler.go:325] Added event &{/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07 2017-11-15 02:17:00.215194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.825953 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod0e4740f5-c9ab-11e7-89f4-c6b053eac242/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.832924 27751 kuberuntime_manager.go:640] Created PodSandbox "0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc" for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.835053 27751 generic.go:345] PLEG: Write status for snginx-25/default: &container.PodStatus{ID:"0e141d4f-c9ab-11e7-89f4-c6b053eac242", Name:"snginx-25", Namespace:"default", IP:"10.32.0.4", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4209887d0)}} (err: <nil>)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.835492 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07/resolv.conf with:
Nov 15 02:17:00 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.835659 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-16_default"
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.836174 27751 kubelet.go:1871] SyncLoop (PLEG): "snginx-25_default(0e141d4f-c9ab-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"0e141d4f-c9ab-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9c8b3e1f835576"}
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.836212 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.848570 27751 cni.go:326] Got netns path /proc/1928/ns/net
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.848583 27751 cni.go:327] Using netns path default
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.848737 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.881339 27751 cni.go:326] Got netns path /proc/1928/ns/net
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.881585 27751 cni.go:327] Using netns path default
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.881734 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.900226 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.5" for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.901402 27751 kuberuntime_manager.go:705] Creating container &Container{Name:nginx,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[{ 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{default-token-qjbsf true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)
Nov 15 02:17:00 af867b kubelet[27751]: I1115 02:17:00.908045 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-28", UID:"0eb57d14-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1325", FieldPath:"spec.containers{nginx}"}): type: 'Normal' reason: 'Pulling' pulling image "nginx"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.015010 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.016617 27751 config.go:404] Receiving a new pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.018534 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.018686 27751 kubelet_pods.go:1284] Generating status for "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.025996 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.028655 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.028697 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.028705 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.028713 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.028963 27751 manager.go:932] Added container: "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.029177 27751 handler.go:325] Added event &{/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:01.027194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.029238 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.045700 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.094545 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.096928 27751 status_manager.go:451] Status for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:00 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.101040 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.225190 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/106570f0-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-1" (UID: "106570f0-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.289495 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.332296 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/106570f0-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-1" (UID: "106570f0-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.332370 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 106570f0-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/106570f0-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.332580 27751 empty_dir.go:264] pod 106570f0-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.332602 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/106570f0-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/106570f0-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.365002 27751 kuberuntime_manager.go:640] Created PodSandbox "66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07" for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.382681 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-2090.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-2090.scope: no such file or directory
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.382733 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-2090.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-2090.scope: no such file or directory
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.383509 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-2090.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-2090.scope: no such file or directory
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.383691 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-0eb57d14\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.383712 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-0eb57d14\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.383729 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-0eb57d14\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388446 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388476 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07-shm.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388489 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388500 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-11d04dfdbcc14d0839b62173cbec9d6618e35afc75e3122e0b5e582cec7bd24.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388509 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-11d04dfdbcc14d083d9b62173cec9d6618e35afc75e3122e0b5e582cec7bd24.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388519 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-11d04dfdbcc14d083d9b62173cbec9d6618e35afc75e312e0b5e582cec7bd24.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388528 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388536 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc-shm.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388545 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388554 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-cb26277dba54.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388575 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-cb26277dba54.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388588 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-cb26277dba54.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388596 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-b3cd55c6dd34.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388603 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-b3cd55c6dd34.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.388611 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-b3cd55c6dd34.mount"
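Annotation: the repeated "Factory ... was unable to handle" / "can handle ... but ignoring" / "Using factory \"raw\"" lines are cAdvisor walking its registered container-handler factories in order for every cgroup it discovers; the systemd factory recognizes .mount units but deliberately declines them, so they end up ignored. A simplified Go sketch of that selection loop, with made-up interface and factory types that are not cAdvisor's API:

    package main

    import (
        "fmt"
        "strings"
    )

    type factory interface {
        Name() string
        // CanHandleAndAccept reports whether the factory recognizes the cgroup
        // path and, if so, whether it actually wants to manage it.
        CanHandleAndAccept(path string) (handle, accept bool)
    }

    type dockerFactory struct{}

    func (dockerFactory) Name() string { return "docker" }
    func (dockerFactory) CanHandleAndAccept(path string) (bool, bool) {
        // Only docker container cgroups, not arbitrary mount/scope units.
        return strings.HasPrefix(path, "/docker/"), true
    }

    type systemdFactory struct{}

    func (systemdFactory) Name() string { return "systemd" }
    func (systemdFactory) CanHandleAndAccept(path string) (bool, bool) {
        // Recognizes systemd .mount units but declines to track them.
        return strings.HasSuffix(path, ".mount"), false
    }

    type rawFactory struct{}

    func (rawFactory) Name() string { return "raw" }
    func (rawFactory) CanHandleAndAccept(path string) (bool, bool) { return true, true }

    func pickFactory(factories []factory, path string) factory {
        for _, f := range factories {
            handle, accept := f.CanHandleAndAccept(path)
            if !handle {
                fmt.Printf("Factory %q was unable to handle container %q\n", f.Name(), path)
                continue
            }
            if !accept {
                fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.Name(), path)
                return nil // container is ignored entirely
            }
            fmt.Printf("Using factory %q for container %q\n", f.Name(), path)
            return f
        }
        return nil
    }

    func main() {
        fs := []factory{dockerFactory{}, systemdFactory{}, rawFactory{}}
        pickFactory(fs, "/system.slice/run-docker-netns-cb26277dba54.mount")
        pickFactory(fs, "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242")
    }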
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.393069 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.393161 27751 atomic_writer.go:145] pod default/snginx-1 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/106570f0-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.393274 27751 atomic_writer.go:160] pod default/snginx-1 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/106570f0-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_01.269455337
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.393396 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/106570f0-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-1" (UID: "106570f0-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.393637 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-1", UID:"106570f0-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
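Annotation: the atomic_writer lines describe the write-to-timestamped-directory-then-swap strategy used for secret payloads, so a container reading the volume never sees a half-written file. A rough Go sketch of that idea, not the kubelet's actual atomic_writer (its directory naming and symlink handling differ in detail):

    package main

    import (
        "os"
        "path/filepath"
        "time"
    )

    func atomicWrite(targetDir string, payload map[string][]byte) error {
        // Write everything into a fresh timestamped data directory first.
        tsDir := filepath.Join(targetDir, ".."+time.Now().Format("2006_01_02_15_04_05.000000000"))
        if err := os.MkdirAll(tsDir, 0755); err != nil {
            return err
        }
        for name, data := range payload {
            if err := os.WriteFile(filepath.Join(tsDir, name), data, 0644); err != nil {
                return err
            }
        }
        // Then atomically repoint the ..data symlink: rename(2) is atomic, so
        // readers always see either the old or the new set of files.
        tmpLink := filepath.Join(targetDir, "..data_tmp")
        os.Remove(tmpLink)
        if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
            return err
        }
        return os.Rename(tmpLink, filepath.Join(targetDir, "..data"))
    }

    func main() {
        _ = atomicWrite("/tmp/example-secret-volume", map[string][]byte{"token": []byte("redacted")})
    }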
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.398152 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-c1fb4b3943b5e595ebc1e33980b6b2bca6fac3ac5b74374d709559e02d55896.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.398177 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-c1fb4b3943b5e595ecbc1e3398b6b2bca6fac3ac5b74374d709559e02d55896.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.398188 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-c1fb4b3943b5e595ecbc1e33980b6b2bca6fac3ac5b7434d709559e02d55896.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.401591 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-b5b36e0aa65ce37565a6b96f3df8c3fbe6f5bf701af61470f9c8b3e1f835576-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.401608 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-containers-b5b36e0aa65ce37565a6b968f3df8c3fe6f5bf701af61470f9c8b3e1f835576-shm.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.401622 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-containers-b5b36e0aa65ce37565a6b968f3df8c3fbe6f5bf701af61470f9cb3e1f835576-shm.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410344 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-9a5667b14714.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410372 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/run-docker-netns-9a5667b14714.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410382 27751 manager.go:901] ignoring container "/system.slice/run-docker-netns-9a5667b14714.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410390 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-7849cd5bef6ac97e51cb3dff128a214cd542fe2d505cf379e8cacea700428f5.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410399 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-7849cd5bef6ac97e5e1cb3dff18a214cd542fe2d505cf379e8cacea700428f5.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.410408 27751 manager.go:901] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-7849cd5bef6ac97e5e1cb3dff128a214cd542fe2d505cf79e8cacea700428f5.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.412660 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.6" for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.412782 27751 kuberuntime_manager.go:705] Creating container &Container{Name:nginx,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[{ 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{default-token-qjbsf true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.414115 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-16", UID:"0e4740f5-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1320", FieldPath:"spec.containers{nginx}"}): type: 'Normal' reason: 'Pulling' pulling image "nginx"
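Annotation: the &Container{...} dump above is easier to read when rewritten with the Kubernetes API types. This is a reconstruction from the log line for readability, not the manifest that was actually applied, and it assumes the k8s.io/api/core/v1 package is available to build against:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        c := v1.Container{
            Name:  "nginx",
            Image: "nginx",
            Ports: []v1.ContainerPort{{ContainerPort: 80, Protocol: v1.ProtocolTCP}},
            VolumeMounts: []v1.VolumeMount{{
                Name:      "default-token-qjbsf",
                ReadOnly:  true,
                MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
            }},
            TerminationMessagePath:   "/dev/termination-log",
            TerminationMessagePolicy: v1.TerminationMessageReadFile, // "File"
            ImagePullPolicy:          v1.PullAlways,
        }
        fmt.Printf("%+v\n", c)
    }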
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.546967 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.548807 27751 config.go:404] Receiving a new pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.549285 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.549443 27751 kubelet_pods.go:1284] Generating status for "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.549790 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.552868 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.552899 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.552924 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.552933 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.553123 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.553207 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:01.552194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.553251 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: E1115 02:17:01.553551 27751 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.557754 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.567869 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.569746 27751 config.go:404] Receiving a new pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.569986 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.570125 27751 kubelet_pods.go:1284] Generating status for "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.570363 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.575222 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.576775 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.576815 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.576823 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.576830 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.576977 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.577082 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:01.574194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.577113 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.638655 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10b34ac6-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-6" (UID: "10b34ac6-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.638740 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10af02e8-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-4" (UID: "10af02e8-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.639989 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.642003 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646324 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646352 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646370 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646435 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646446 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.646468 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
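Annotation: the "computePodActions got {KillPod:true CreateSandbox:true ...}" lines are the sync plan the kubelet derives by comparing the desired pod with its latest runtime status; for a brand-new pod there is no live sandbox, so the plan is to create one and start every container by index. A hedged Go sketch of that decision shape, using simplified stand-in types rather than the kubelet's kuberuntime package:

    package main

    import "fmt"

    type podActions struct {
        KillPod           bool
        CreateSandbox     bool
        SandboxID         string
        Attempt           int
        ContainersToStart []int
    }

    func computePodActions(sandboxExists bool, numContainers int) podActions {
        a := podActions{}
        if !sandboxExists {
            // No usable sandbox: tear down anything stale and start from scratch.
            a.KillPod = true
            a.CreateSandbox = true
            for i := 0; i < numContainers; i++ {
                a.ContainersToStart = append(a.ContainersToStart, i)
            }
        }
        return a
    }

    func main() {
        fmt.Printf("%+v\n", computePodActions(false, 1))
    }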
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.649147 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.649173 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242"
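Annotation: the cgroup parent logged above follows the pod-level cgroup layout: a BestEffort pod lands under /kubepods/besteffort/pod<UID>. A small, simplified Go illustration of that path construction; the real kubelet also handles the systemd cgroup driver and the Guaranteed/Burstable classes differently:

    package main

    import (
        "fmt"
        "strings"
    )

    func podCgroupParent(qosClass, podUID string) string {
        base := "/kubepods"
        if qosClass == "Burstable" || qosClass == "BestEffort" {
            base += "/" + strings.ToLower(qosClass)
        }
        return fmt.Sprintf("%s/pod%s", base, podUID)
    }

    func main() {
        // -> /kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242
        fmt.Println(podCgroupParent("BestEffort", "106570f0-c9ab-11e7-89f4-c6b053eac242"))
    }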
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.653936 27751 status_manager.go:451] Status for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.733038 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.735321 27751 config.go:404] Receiving a new pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.736980 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.737138 27751 kubelet_pods.go:1284] Generating status for "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.737429 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.740486 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10b34ac6-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-6" (UID: "10b34ac6-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.740534 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10af02e8-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-4" (UID: "10af02e8-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.740577 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 10af02e8-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/10af02e8-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.740761 27751 empty_dir.go:264] pod 10af02e8-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.740779 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/10af02e8-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/10af02e8-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.746237 27751 status_manager.go:451] Status for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.752091 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 10b34ac6-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/10b34ac6-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.752247 27751 empty_dir.go:264] pod 10b34ac6-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.752266 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/10b34ac6-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/10b34ac6-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755385 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755421 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755433 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755441 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755729 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755869 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:01.749194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755899 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-2103.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755908 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-2103.scope: /system.slice/run-2103.scope not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755914 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-2103.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.755921 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-2103.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.756063 27751 manager.go:932] Added container: "/system.slice/run-2103.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.766470 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.767242 27751 handler.go:325] Added event &{/system.slice/run-2103.scope 2017-11-15 02:17:01.749194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.767266 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-106570f0\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.767280 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-106570f0\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.767292 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-106570f0\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.767317 27751 container.go:409] Start housekeeping for container "/system.slice/run-2103.scope"
Nov 15 02:17:01 af867b kubelet[27751]: E1115 02:17:01.767370 27751 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.779138 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-2106.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-2106.scope: no such file or directory
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.779172 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-2106.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-2106.scope: no such file or directory
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.780600 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826200 27751 manager.go:989] Destroyed container: "/system.slice/run-2103.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826235 27751 handler.go:325] Added event &{/system.slice/run-2103.scope 2017-11-15 02:17:01.826224627 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826277 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-2106.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826289 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-2106.scope: /system.slice/run-2106.scope not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826295 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-2106.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826304 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-2106.scope"
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.826410 27751 container.go:354] Failed to create summary reader for "/system.slice/run-2106.scope": none of the resources are being tracked.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826424 27751 manager.go:932] Added container: "/system.slice/run-2106.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826456 27751 handler.go:325] Added event &{/system.slice/run-2106.scope 0001-01-01 00:00:00 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826475 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-10af02e8\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826487 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-10af02e8\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826500 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-10af02e8\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826517 27751 manager.go:989] Destroyed container: "/system.slice/run-2106.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826525 27751 handler.go:325] Added event &{/system.slice/run-2106.scope 2017-11-15 02:17:01.826522316 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826554 27751 container.go:409] Start housekeeping for container "/system.slice/run-2106.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.826709 27751 config.go:282] Setting pods for source api
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.845433 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10c857c2-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-11" (UID: "10c857c2-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.845771 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.853670 27751 generic.go:146] GenericPLEG: 0eb57d14-c9ab-11e7-89f4-c6b053eac242/0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc: non-existent -> running
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.853689 27751 generic.go:146] GenericPLEG: 0e4740f5-c9ab-11e7-89f4-c6b053eac242/66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07: non-existent -> running
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.855863 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc"] for pod "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.931909 27751 generic.go:345] PLEG: Write status for snginx-28/default: &container.PodStatus{ID:"0eb57d14-c9ab-11e7-89f4-c6b053eac242", Name:"snginx-28", Namespace:"default", IP:"10.32.0.5", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421be4000)}} (err: <nil>)
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.932893 27751 kubelet.go:1871] SyncLoop (PLEG): "snginx-28_default(0eb57d14-c9ab-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"0eb57d14-c9ab-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc"}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.934864 27751 kuberuntime_manager.go:833] getSandboxIDByPodUID got sandbox IDs ["66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07"] for pod "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)"
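Annotation: the GenericPLEG lines show the relist loop: the pod lifecycle event generator compares the previous and current container/sandbox states, records a transition ("non-existent -> running"), and surfaces it to the sync loop as a ContainerStarted event. A hedged Go sketch of the diff step, with simplified stand-in types rather than the kubelet's pleg package:

    package main

    import "fmt"

    type lifecycleEvent struct {
        PodID string
        Type  string // e.g. "ContainerStarted"
        Data  string // container or sandbox ID
    }

    // relist compares the previous and current states for one pod and emits an
    // event for every ID that newly appeared in the running state.
    func relist(podID string, prev, curr map[string]string) []lifecycleEvent {
        var events []lifecycleEvent
        for id, state := range curr {
            if prev[id] == "" && state == "running" {
                fmt.Printf("GenericPLEG: %s/%s: non-existent -> running\n", podID, id)
                events = append(events, lifecycleEvent{PodID: podID, Type: "ContainerStarted", Data: id})
            }
        }
        return events
    }

    func main() {
        prev := map[string]string{}
        curr := map[string]string{"0fab8639e261263f063ff03cc306506a34283f24ab70834abda8a699113657cc": "running"}
        fmt.Println(relist("0eb57d14-c9ab-11e7-89f4-c6b053eac242", prev, curr))
    }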
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.946525 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10c857c2-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-11" (UID: "10c857c2-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.969078 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 10c857c2-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/10c857c2-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.971971 27751 empty_dir.go:264] pod 10c857c2-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.972009 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/10c857c2-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/10c857c2-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.983179 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-2123.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.983270 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-2123.scope: /system.slice/run-2123.scope not handled by systemd handler
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.983278 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-2123.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.983286 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-2123.scope"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.983540 27751 manager.go:932] Added container: "/system.slice/run-2123.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.990264 27751 handler.go:325] Added event &{/system.slice/run-2123.scope 2017-11-15 02:17:01.980194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.990314 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-10b34ac6\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.990333 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-10b34ac6\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.990345 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-10b34ac6\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.990371 27751 container.go:409] Start housekeeping for container "/system.slice/run-2123.scope"
Nov 15 02:17:01 af867b kubelet[27751]: W1115 02:17:01.990621 27751 container.go:367] Failed to get RecentStats("/system.slice/run-2123.scope") while determining the next housekeeping: unable to find data for container /system.slice/run-2123.scope
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.991062 27751 manager.go:989] Destroyed container: "/system.slice/run-2123.scope" (aliases: [], namespace: "")
Nov 15 02:17:01 af867b kubelet[27751]: I1115 02:17:01.991092 27751 handler.go:325] Added event &{/system.slice/run-2123.scope 2017-11-15 02:17:01.991084523 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.004208 27751 generic.go:345] PLEG: Write status for snginx-16/default: &container.PodStatus{ID:"0e4740f5-c9ab-11e7-89f4-c6b053eac242", Name:"snginx-16", Namespace:"default", IP:"10.32.0.6", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc420d76000)}} (err: <nil>)
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.004264 27751 kubelet.go:1871] SyncLoop (PLEG): "snginx-16_default(0e4740f5-c9ab-11e7-89f4-c6b053eac242)", event: &pleg.PodLifecycleEvent{ID:"0e4740f5-c9ab-11e7-89f4-c6b053eac242", Type:"ContainerStarted", Data:"66b9c434d36eb927e5ef517d07f9a34fc5e788400450810f5065e3e2ce1a0d07"}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.017809 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.018836 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.018931 27751 atomic_writer.go:145] pod default/snginx-4 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/10af02e8-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.019035 27751 atomic_writer.go:160] pod default/snginx-4 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/10af02e8-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.062005300
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.019140 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10af02e8-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-4" (UID: "10af02e8-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.019422 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-4", UID:"10af02e8-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.066879 27751 config.go:404] Receiving a new pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.070139 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.070283 27751 kubelet_pods.go:1284] Generating status for "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.070556 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082417 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10d84c3d-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-33" (UID: "10d84c3d-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082526 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082541 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082547 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082554 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082698 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082836 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:02.080194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.082866 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.087609 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.133079 27751 status_manager.go:451] Status for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.133393 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.140131 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.140218 27751 atomic_writer.go:145] pod default/snginx-6 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/10b34ac6-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.140305 27751 atomic_writer.go:160] pod default/snginx-6 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/10b34ac6-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.321763843
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.140396 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10b34ac6-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-6" (UID: "10b34ac6-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.140429 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-6", UID:"10b34ac6-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.145178 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.148820 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.148879 27751 atomic_writer.go:145] pod default/snginx-11 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/10c857c2-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.148955 27751 atomic_writer.go:160] pod default/snginx-11 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/10c857c2-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.926896518
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.149031 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10c857c2-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-11" (UID: "10c857c2-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.149055 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-11", UID:"10c857c2-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158772 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158797 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158809 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158844 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158852 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.158887 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-6_default(10b34ac6-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.167769 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.169058 27751 config.go:404] Receiving a new pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.172804 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.172949 27751 kubelet_pods.go:1284] Generating status for "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.173227 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.180726 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181494 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181512 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181522 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181555 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181563 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.181579 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-4_default(10af02e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.186806 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.186826 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.186834 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.186854 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.187031 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.187148 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:02.180194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.187197 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.188509 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10e8f77b-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-24" (UID: "10e8f77b-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.194180 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10d84c3d-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-33" (UID: "10d84c3d-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.194233 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 10d84c3d-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/10d84c3d-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.194379 27751 empty_dir.go:264] pod 10d84c3d-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.194397 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/10d84c3d-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/10d84c3d-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.253782 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-2136.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-2136.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.253845 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-2136.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-2136.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.257012 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-2136.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.258110 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-2136.scope: /system.slice/run-2136.scope not handled by systemd handler
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.258727 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-2136.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.258745 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-2136.scope"
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.260229 27751 container.go:354] Failed to create summary reader for "/system.slice/run-2136.scope": none of the resources are being tracked.
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.260249 27751 manager.go:932] Added container: "/system.slice/run-2136.scope" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.260293 27751 handler.go:325] Added event &{/system.slice/run-2136.scope 0001-01-01 00:00:00 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.261699 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-10c857c2\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.261713 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-10c857c2\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.261724 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-10c857c2\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.263414 27751 manager.go:989] Destroyed container: "/system.slice/run-2136.scope" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.264767 27751 handler.go:325] Added event &{/system.slice/run-2136.scope 2017-11-15 02:17:02.263422817 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.264844 27751 container.go:409] Start housekeeping for container "/system.slice/run-2136.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.268922 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.274985 27751 config.go:404] Receiving a new pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.275353 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.275498 27751 kubelet_pods.go:1284] Generating status for "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.277228 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.279496 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.279515 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.279521 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.279528 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.282086 27751 manager.go:932] Added container: "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.282155 27751 handler.go:325] Added event &{/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:02.278194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.282842 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: E1115 02:17:02.282905 27751 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.293849 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.294075 27751 status_manager.go:451] Status for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:01 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:02 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.297965 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/110c8f01-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-30" (UID: "110c8f01-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.298058 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10e8f77b-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-24" (UID: "10e8f77b-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.298108 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 10e8f77b-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/10e8f77b-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.298245 27751 empty_dir.go:264] pod 10e8f77b-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.298263 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/10e8f77b-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/10e8f77b-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.324295 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-2146.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.324331 27751 factory.go:105] Error trying to work out if we can handle /system.slice/run-2146.scope: /system.slice/run-2146.scope not handled by systemd handler
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.324337 27751 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-2146.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.324345 27751 factory.go:112] Using factory "raw" for container "/system.slice/run-2146.scope"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.327697 27751 manager.go:932] Added container: "/system.slice/run-2146.scope" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.328651 27751 handler.go:325] Added event &{/system.slice/run-2146.scope 2017-11-15 02:17:02.322194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.329445 27751 container.go:409] Start housekeeping for container "/system.slice/run-2146.scope"
Nov 15 02:17:02 af867b kubelet[27751]: E1115 02:17:02.336789 27751 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.339945 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-2146.scope": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.340006 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-2146.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-2146.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.361405 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.364513 27751 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.364560 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-10d84c3d\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.364574 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-10d84c3d\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.364586 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-10d84c3d\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.366337 27751 manager.go:989] Destroyed container: "/system.slice/run-2146.scope" (aliases: [], namespace: "")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.366356 27751 handler.go:325] Added event &{/system.slice/run-2146.scope 2017-11-15 02:17:02.366350324 +0000 UTC containerDeletion {<nil>}}
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381096 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381117 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381131 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381174 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381186 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.381244 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.411090 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/110c8f01-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-30" (UID: "110c8f01-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.411211 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 110c8f01-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/110c8f01-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.411357 27751 empty_dir.go:264] pod 110c8f01-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.411375 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/110c8f01-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/110c8f01-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432348 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432441 27751 atomic_writer.go:145] pod default/snginx-33 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/10d84c3d-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432532 27751 atomic_writer.go:160] pod default/snginx-33 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/10d84c3d-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.827992365
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432618 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10d84c3d-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-33" (UID: "10d84c3d-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432652 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-33", UID:"10d84c3d-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.432737 27751 request.go:462] Throttling request took 108.438482ms, request: GET:https://10.241.226.117:6443/api/v1/namespaces/default/secrets/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.434147 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-2151.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-2151.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.434181 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-2151.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-2151.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: W1115 02:17:02.434204 27751 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-2151.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-2151.scope: no such file or directory
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.434336 27751 factory.go:116] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-10e8f77b\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.434353 27751 factory.go:109] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-10e8f77b\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount", but ignoring.
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.434366 27751 manager.go:901] ignoring container "/system.slice/var-lib-kubelet-pods-10e8f77b\\x2dc9ab\\x2d11e7\\x2d89f4\\x2dc6b053eac242-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dqjbsf.mount"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.470230 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.470312 27751 atomic_writer.go:145] pod default/snginx-24 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/10e8f77b-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.470388 27751 atomic_writer.go:160] pod default/snginx-24 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/10e8f77b-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.323676328
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.470471 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/10e8f77b-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-24" (UID: "10e8f77b-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.470497 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-24", UID:"10e8f77b-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.480916 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.480937 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.481276 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.481288 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10b34ac6-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.481577 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.481592 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10af02e8-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482007 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482027 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482037 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482095 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482107 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.482126 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.488052 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.488068 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10e8f77b-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.523810 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.589583 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.591369 27751 config.go:404] Receiving a new pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.615016 27751 request.go:462] Throttling request took 180.918686ms, request: GET:https://10.241.226.117:6443/api/v1/namespaces/default/secrets/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.723653 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.723796 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.723818 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.727736 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.727822 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.727859 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-33_default(10d84c3d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.807844 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.811009 27751 atomic_writer.go:145] pod default/snginx-30 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/110c8f01-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.811125 27751 atomic_writer.go:160] pod default/snginx-30 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/110c8f01-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_02.735224807
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.811238 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/110c8f01-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-30" (UID: "110c8f01-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.811383 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-30", UID:"110c8f01-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.811737 27751 request.go:462] Throttling request took 338.947459ms, request: PUT:https://10.241.226.117:6443/api/v1/namespaces/default/pods/snginx-24/status
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.867181 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.869949 27751 config.go:404] Receiving a new pod "snginx-32_default(1164e65d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.870197 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.875453 27751 status_manager.go:451] Status for pod "snginx-24_default(10e8f77b-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:02 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.877578 27751 config.go:404] Receiving a new pod "snginx-52_default(1167792f-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.883526 27751 config.go:282] Setting pods for source api
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903207 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903231 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903244 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903290 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903301 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.903323 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.960136 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 2379, Path: /health
Nov 15 02:17:02 af867b kubelet[27751]: I1115 02:17:02.960226 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.014792 27751 request.go:462] Throttling request took 137.359425ms, request: GET:https://10.241.226.117:6443/api/v1/namespaces/default/pods/snginx-30
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.114103 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.118014 27751 config.go:404] Receiving a new pod "snginx-12_default(1188bbdb-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.183941 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.186254 27751 config.go:404] Receiving a new pod "snginx-7_default(119ec189-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.197799 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/kubedns
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.197824 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.210998 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.213504 27751 config.go:404] Receiving a new pod "snginx-26_default(1195f2d8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.214211 27751 request.go:462] Throttling request took 91.34839ms, request: PUT:https://10.241.226.117:6443/api/v1/namespaces/default/pods/snginx-30/status
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.214980 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.221679 27751 config.go:404] Receiving a new pod "snginx-27_default(11a56a59-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.222209 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.226056 27751 config.go:404] Receiving a new pod "snginx-8_default(11946462-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.246288 27751 http.go:96] Probe succeeded for http://127.0.0.1:2379/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:17:03 GMT] Content-Length:[18] Content-Type:[text/plain; charset=utf-8]] 0xc420f64ea0 18 [] true false map[] 0xc420a30700 <nil>}
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.246336 27751 prober.go:113] Liveness probe for "etcd-af867b_kube-system(d76e26fba3bf2bfd215eb29011d55250):etcd" succeeded
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.260857 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/kubedns, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:17:03 GMT] Content-Length:[51] Content-Type:[application/json]] 0xc420f64fe0 51 [] true false map[] 0xc42271d200 <nil>}
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.260914 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.418917 27751 status_manager.go:451] Status for pod "snginx-30_default(110c8f01-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:02 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.460024 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.464102 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.466632 27751 config.go:404] Receiving a new pod "snginx-51_default(11c10f74-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.507058 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.510292 27751 config.go:404] Receiving a new pod "snginx-80_default(11be9038-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.582043 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.584747 27751 config.go:404] Receiving a new pod "snginx-31_default(11dedb1e-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.589345 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.594357 27751 config.go:404] Receiving a new pod "snginx-34_default(11e01340-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.620825 27751 prober.go:160] HTTP-Probe Host: https://127.0.0.1, Port: 6443, Path: /healthz
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.620858 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.645075 27751 http.go:96] Probe succeeded for https://127.0.0.1:6443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Wed, 15 Nov 2017 02:17:03 GMT]] 0xc42121d620 2 [] false false map[] 0xc420a31c00 0xc421b98a50}
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.645119 27751 prober.go:113] Liveness probe for "kube-apiserver-af867b_kube-system(4e0fac5dee63099d647b4d031a37ad7d):kube-apiserver" succeeded
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.657902 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.659829 27751 config.go:404] Receiving a new pod "snginx-57_default(11e0958a-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.708913 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.710643 27751 config.go:404] Receiving a new pod "snginx-50_default(11eaf831-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.754419 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.756893 27751 config.go:404] Receiving a new pod "snginx-19_default(11fb7e52-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.757623 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.760664 27751 config.go:404] Receiving a new pod "snginx-36_default(11fbf7a4-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.761607 27751 config.go:282] Setting pods for source api
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.763977 27751 config.go:404] Receiving a new pod "snginx-45_default(11fbce72-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.835400 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /healthcheck/dnsmasq
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.835437 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.837171 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.837196 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod110c8f01-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.839250 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/healthcheck/dnsmasq, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Date:[Wed, 15 Nov 2017 02:17:03 GMT] Content-Length:[51]] 0xc421101ee0 51 [] true false map[] 0xc421178800 <nil>}
Nov 15 02:17:03 af867b kubelet[27751]: I1115 02:17:03.839301 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):dnsmasq" succeeded
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:03.998970 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.001614 27751 config.go:404] Receiving a new pod "snginx-62_default(121c97f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.005031 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.009868 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.009886 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod10d84c3d-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.010308 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f/resolv.conf with:
Nov 15 02:17:04 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.010455 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-1_default"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.068181 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.071465 27751 config.go:404] Receiving a new pod "snginx-22_default(1222faad-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.084970 27751 cni.go:326] Got netns path /proc/2206/ns/net
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.084983 27751 cni.go:327] Using netns path default
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.085081 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.095442 27751 manager.go:932] Added container: "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f" (aliases: [k8s_POD_snginx-1_default_106570f0-c9ab-11e7-89f4-c6b053eac242_0 aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f], namespace: "docker")
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.095586 27751 handler.go:325] Added event &{/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f 2017-11-15 02:17:02.951194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.095627 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod106570f0-c9ab-11e7-89f4-c6b053eac242/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.116967 27751 cni.go:326] Got netns path /proc/2206/ns/net
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.116998 27751 cni.go:327] Using netns path default
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.117129 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.162447 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.164516 27751 config.go:404] Receiving a new pod "snginx-3_default(123d32e8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.431330 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.472654 27751 config.go:404] Receiving a new pod "snginx-41_default(124c80b8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.504600 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 10054, Path: /metrics
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.504626 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.518452 27751 http.go:96] Probe succeeded for http://10.32.0.2:10054/metrics, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; version=0.0.4] Date:[Wed, 15 Nov 2017 02:17:04 GMT]] 0xc421638ea0 -1 [] true true map[] 0xc420c90400 <nil>}
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.518530 27751 prober.go:113] Liveness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):sidecar" succeeded
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.591026 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.594301 27751 config.go:404] Receiving a new pod "snginx-58_default(12526146-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.596566 27751 kuberuntime_manager.go:640] Created PodSandbox "aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f" for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.601998 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.613639 27751 config.go:404] Receiving a new pod "snginx-20_default(125d0f6c-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.841184 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.847705 27751 config.go:404] Receiving a new pod "snginx-21_default(12634156-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.848663 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.851649 27751 config.go:404] Receiving a new pod "snginx-42_default(1275c2ca-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.853417 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.882551 27751 config.go:404] Receiving a new pod "snginx-56_default(1275fae4-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.883365 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.894587 27751 config.go:404] Receiving a new pod "snginx-9_default(1275c6f5-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.900636 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.905502 27751 config.go:404] Receiving a new pod "snginx-44_default(127ec404-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.923953 27751 config.go:282] Setting pods for source api
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.958257 27751 config.go:404] Receiving a new pod "snginx-53_default(12867215-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:04 af867b kubelet[27751]: I1115 02:17:04.994539 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.008229 27751 config.go:404] Receiving a new pod "snginx-2_default(127ed64d-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.231553 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.235551 27751 config.go:404] Receiving a new pod "snginx-48_default(12b81849-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.235809 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.239541 27751 config.go:404] Receiving a new pod "snginx-61_default(12b2d816-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.240148 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.243432 27751 config.go:404] Receiving a new pod "snginx-29_default(12b0c1b4-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.245678 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.254601 27751 config.go:404] Receiving a new pod "snginx-40_default(12b094bf-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.257408 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.260393 27751 config.go:404] Receiving a new pod "snginx-5_default(12b8bcf8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.364895 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 6784, Path: /status
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.364939 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.377892 27751 http.go:96] Probe succeeded for http://127.0.0.1:6784/status, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:17:05 GMT] Content-Length:[445] Content-Type:[text/plain; charset=utf-8]] 0xc421398a40 445 [] true false map[] 0xc4200ddb00 <nil>}
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.377949 27751 prober.go:113] Liveness probe for "weave-net-rg7fn_kube-system(b77b0858-c9a8-11e7-89f4-c6b053eac242):weave" succeeded
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.382026 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.387229 27751 config.go:404] Receiving a new pod "snginx-88_default(12e591cf-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.388303 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.390688 27751 config.go:404] Receiving a new pod "snginx-66_default(12d0b0e9-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.392464 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.399026 27751 config.go:404] Receiving a new pod "snginx-23_default(12ce91d1-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.399128 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.402667 27751 config.go:404] Receiving a new pod "snginx-38_default(12e1b910-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.413546 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.417415 27751 config.go:404] Receiving a new pod "snginx-70_default(12df4528-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.536955 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.548055 27751 config.go:404] Receiving a new pod "snginx-77_default(12fcc791-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.549460 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.572245 27751 config.go:404] Receiving a new pod "snginx-95_default(13008e85-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.581520 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.591515 27751 docker_sandbox.go:691] Will attempt to re-write config file /var/lib/docker/containers/e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658/resolv.conf with:
Nov 15 02:17:05 af867b kubelet[27751]: [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local opcwlaas.oraclecloud.internal. options ndots:5]
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.591653 27751 plugins.go:392] Calling network plugin cni to set up pod "snginx-11_default"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.599403 27751 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242/e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.600275 27751 config.go:404] Receiving a new pod "snginx-63_default(130cfe94-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.600445 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.611839 27751 config.go:404] Receiving a new pod "snginx-69_default(1315f5b8-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.612467 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.631771 27751 config.go:404] Receiving a new pod "snginx-54_default(131477f7-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.796973 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.802027 27751 config.go:404] Receiving a new pod "snginx-13_default(133bae95-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.929075 27751 kubelet.go:1837] SyncLoop (ADD, "api"): "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.929194 27751 kubelet.go:1913] SyncLoop (housekeeping)
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.929616 27751 generic.go:146] GenericPLEG: 106570f0-c9ab-11e7-89f4-c6b053eac242/aef8399af06a593e2faec702ec96747200084cadab8b41510604cf1759a7983f: non-existent -> running
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.930476 27751 kubelet_pods.go:1284] Generating status for "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.930837 27751 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.949769 27751 volume_manager.go:337] Waiting for volumes to attach and mount for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.974942 27751 config.go:282] Setting pods for source api
Nov 15 02:17:05 af867b kubelet[27751]: I1115 02:17:05.982327 27751 config.go:404] Receiving a new pod "snginx-71_default(13482d80-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.023985 27751 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/114df326-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-47" (UID: "114df326-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.034822 27751 config.go:282] Setting pods for source api
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.039675 27751 config.go:404] Receiving a new pod "snginx-14_default(134dfefb-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.061925 27751 config.go:282] Setting pods for source api
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.062935 27751 status_manager.go:451] Status for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:05 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:05 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [nginx]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-11-15 02:17:02 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.196.65.210 PodIP: StartTime:2017-11-15 02:17:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:nginx State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:nginx ImageID: ContainerID:}] QOSClass:BestEffort})
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.068545 27751 config.go:404] Receiving a new pod "snginx-97_default(135fa4d9-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.071311 27751 config.go:282] Setting pods for source api
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.124549 27751 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/114df326-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-47" (UID: "114df326-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.124631 27751 secret.go:186] Setting up volume default-token-qjbsf for pod 114df326-c9ab-11e7-89f4-c6b053eac242 at /var/lib/kubelet/pods/114df326-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.124828 27751 empty_dir.go:264] pod 114df326-c9ab-11e7-89f4-c6b053eac242: mounting tmpfs for volume wrapped_default-token-qjbsf
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.124850 27751 mount_linux.go:135] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/114df326-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/114df326-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf])
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.228200 27751 secret.go:217] Received secret default/default-token-qjbsf containing (3) pieces of data, 1878 total bytes
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.228307 27751 atomic_writer.go:145] pod default/snginx-47 volume default-token-qjbsf: write required for target directory /var/lib/kubelet/pods/114df326-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.228397 27751 atomic_writer.go:160] pod default/snginx-47 volume default-token-qjbsf: performed write of new data to ts data directory: /var/lib/kubelet/pods/114df326-c9ab-11e7-89f4-c6b053eac242/volumes/kubernetes.io~secret/default-token-qjbsf/..119811_15_11_02_17_06.068422170
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.228483 27751 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-qjbsf" (UniqueName: "kubernetes.io/secret/114df326-c9ab-11e7-89f4-c6b053eac242-default-token-qjbsf") pod "snginx-47" (UID: "114df326-c9ab-11e7-89f4-c6b053eac242")
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.228799 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-47", UID:"114df326-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-qjbsf"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258550 27751 volume_manager.go:366] All volumes are attached and mounted for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258594 27751 kuberuntime_manager.go:370] No sandbox for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)" can be found. Need to start a new one
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258608 27751 kuberuntime_manager.go:556] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258664 27751 kuberuntime_manager.go:565] SyncPod received new pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)", will create a sandbox for it
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258673 27751 kuberuntime_manager.go:574] Stopping PodSandbox for "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)", will start new one
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.258693 27751 kuberuntime_manager.go:626] Creating sandbox for pod "snginx-47_default(114df326-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.292018 27751 kubelet.go:2092] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.420750 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.420782 27751 docker_service.go:407] Setting cgroup parent to: "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.421369 27751 cni.go:326] Got netns path /proc/2335/ns/net
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.421378 27751 cni.go:327] Using netns path default
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.421490 27751 cni.go:298] About to add CNI network cni-loopback (type=loopback)
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.470939 27751 manager.go:932] Added container: "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242/e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658" (aliases: [k8s_POD_snginx-11_default_10c857c2-c9ab-11e7-89f4-c6b053eac242_0 e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658], namespace: "docker")
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.475580 27751 handler.go:325] Added event &{/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242/e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658 2017-11-15 02:17:04.153194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.475640 27751 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.475659 27751 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242: /kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242 not handled by systemd handler
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.475667 27751 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.475677 27751 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.480816 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod10c857c2-c9ab-11e7-89f4-c6b053eac242/e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.481736 27751 cni.go:326] Got netns path /proc/2335/ns/net
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.481754 27751 cni.go:327] Using netns path default
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.482135 27751 cni.go:298] About to add CNI network weave (type=weave-net)
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.529495 27751 kuberuntime_manager.go:654] Determined the ip "10.32.0.7" for pod "snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)" after sandbox changed
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.545901 27751 kuberuntime_manager.go:705] Creating container &Container{Name:nginx,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[{ 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{default-token-qjbsf true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,} in pod snginx-1_default(106570f0-c9ab-11e7-89f4-c6b053eac242)
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.620780 27751 eviction_manager.go:221] eviction manager: synchronize housekeeping
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.622049 27751 manager.go:932] Added container: "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242" (aliases: [], namespace: "")
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.626649 27751 handler.go:325] Added event &{/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242 2017-11-15 02:17:05.945194711 +0000 UTC containerCreation {<nil>}}
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.664752 27751 server.go:227] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"snginx-1", UID:"106570f0-c9ab-11e7-89f4-c6b053eac242", APIVersion:"v1", ResourceVersion:"1337", FieldPath:"spec.containers{nginx}"}): type: 'Normal' reason: 'Pulling' pulling image "nginx"
Nov 15 02:17:06 af867b kubelet[27751]: I1115 02:17:06.665245 27751 container.go:409] Start housekeeping for container "/kubepods/besteffort/pod114df326-c9ab-11e7-89f4-c6b053eac242"
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.016699 27751 kuberuntime_manager.go:640] Created PodSandbox "e96d5281f1655945493aeaeeae76261f1190dfd41d2f7d754f19ce7b72309658" for pod "snginx-11_default(10c857c2-c9ab-11e7-89f4-c6b053eac242)"
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.600828 27751 prober.go:160] HTTP-Probe Host: http://10.32.0.2, Port: 8081, Path: /readiness
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.600869 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.609885 27751 http.go:96] Probe succeeded for http://10.32.0.2:8081/readiness, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Wed, 15 Nov 2017 02:17:07 GMT] Content-Length:[3] Content-Type:[text/plain; charset=utf-8]] 0xc421a252a0 3 [] true false map[] 0xc420c90400 <nil>}
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.609945 27751 prober.go:113] Readiness probe for "kube-dns-545bc4bfd4-zvfqd_kube-system(97270c63-c9a8-11e7-89f4-c6b053eac242):kubedns" succeeded
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.682899 27751 prober.go:160] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.682944 27751 prober.go:163] HTTP-Probe Headers: map[]
Nov 15 02:17:07 af867b kubelet[27751]: I1115 02:17:07.703933 27751 ht