W0302 01:46:22.865416 1 vsphere.go:356] Creating new client session since the existing session is not valid or not authenticated
I0302 01:48:56.253726 1 reconciler.go:213] Started AttachVolume for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" to node "k8s-02n.dev.mynodes.com"
I0302 01:48:58.658099 1 operation_executor.go:620] AttachVolume.Attach succeeded for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" (spec.Name: "pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2") from node "k8s-02n.dev.mynodes.com".
I0302 01:52:38.694350 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 01:52:38.694398 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 01:52:38.694413 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 01:52:38.694426 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
W0302 01:56:27.756605 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1972319/1971978]) [1973318]
I0302 02:02:38.694534 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:02:38.694778 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:02:38.694862 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:02:38.694931 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
I0302 02:12:38.694765 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:12:38.694935 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:12:38.695020 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:12:38.695099 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
W0302 02:14:39.765791 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1974054/1973321]) [1975053]
I0302 02:22:38.694864 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:22:38.695097 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:22:38.695161 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:22:38.695355 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
W0302 02:31:10.777326 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1975629/1975055]) [1976628]
I0302 02:32:38.695138 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:32:38.695218 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:32:38.695235 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:32:38.695247 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
I0302 02:42:38.695366 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:42:38.695519 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:42:38.695576 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:42:38.695650 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
W0302 02:48:08.786402 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1977252/1976631]) [1978251]
I0302 02:52:38.695721 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 02:52:38.695811 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 02:52:38.695829 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 02:52:38.695842 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
W0302 03:02:12.801237 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1978596/1978255]) [1979595]
I0302 03:02:38.695958 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 03:02:38.696029 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 03:02:38.696044 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 03:02:38.696058 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
I0302 03:12:38.696175 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
I0302 03:12:38.696246 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 03:12:38.696261 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 03:12:38.696273 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
W0302 03:17:24.818043 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1980044/1979597]) [1981043]
I0302 03:22:38.696348 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 03:22:38.696394 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 03:22:38.696409 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1
I0302 03:22:38.696421 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 03:24:05.377082 1 reconciler.go:178] Started DetachVolume for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" from node "k8s-02n.dev.mynodes.com"
I0302 03:24:05.380248 1 operation_executor.go:754] Verified volume is safe to detach for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" (spec.Name: "pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2") from node "k8s-02n.dev.mynodes.com".
I0302 03:24:08.533901 1 operation_executor.go:700] DetachVolume.Detach succeeded for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" (spec.Name: "pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2") from node "k8s-02n.dev.mynodes.com".
I0302 03:24:31.731822 1 reconciler.go:213] Started AttachVolume for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" to node "k8s-02nm.dev.mynodes.com"
I0302 03:24:34.036612 1 operation_executor.go:620] AttachVolume.Attach succeeded for volume "kubernetes.io/vsphere-volume/[wcdc_nonprod_awl_02_42_3010] kubevols/kubernetes-dynamic-pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2.vmdk" (spec.Name: "pvc-0a10e2be-feea-11e6-a1a3-0050569c42d2") from node "k8s-02nm.dev.mynodes.com".
W0302 03:28:45.828689 1 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1981134/1981047]) [1982133]
I0302 03:32:38.696590 1 replication_controller.go:322] Observed updated replication controller community. Desired pod count change: 3->3
I0302 03:32:38.696650 1 replication_controller.go:322] Observed updated replication controller postgres. Desired pod count change: 1->1
I0302 03:32:38.696665 1 replication_controller.go:322] Observed updated replication controller redis. Desired pod count change: 5->5
I0302 03:32:38.696678 1 replication_controller.go:322] Observed updated replication controller kube-coredns. Desired pod count change: 1->1

Mar 01 03:58:02 k8s-02m.mynodes.com kubelet-wrapper[1578]: + exec /usr/bin/rkt run --volume dns,kind=host,source=/etc/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume var-lo
Mar 01 03:58:02 k8s-02m.mynodes.com kubelet-wrapper[1578]: /fluentd-ds-ready=true,lb=yes --cloud-config=/etc/vsphere.conf --cloud-provider=vsphere
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: pubkey: prefix: "quay.io/coreos/hyperkube"
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: key: "https://quay.io/aci-signing-key"
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: gpg key fingerprint is: BFF3 13CD AA56 0B16 A898 7B8F 72AB F5F6 799D 33BC
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Quay.io ACI Converter (ACI conversion signing key) <support@quay.io>
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Trusting "https://quay.io/aci-signing-key" for prefix "quay.io/coreos/hyperkube" without fingerprint review.
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Added key for prefix "quay.io/coreos/hyperkube" at "/etc/rkt/trustedkeys/prefix.d/quay.io/coreos/hyperkube/bff313cdaa560b16a8987b
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading signature: 0 B/473 B
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading signature: 473 B/473 B
Mar 01 03:58:08 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading signature: 473 B/473 B
Mar 01 03:58:09 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 0 B/230 MB
Mar 01 03:58:09 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 16.3 KB/230 MB
Mar 01 03:58:10 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 1.18 MB/230 MB
Mar 01 03:58:11 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 7.29 MB/230 MB
Mar 01 03:58:12 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 26.1 MB/230 MB
Mar 01 03:58:13 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 62 MB/230 MB
Mar 01 03:58:14 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 93.8 MB/230 MB
Mar 01 03:58:15 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 138 MB/230 MB
Mar 01 03:58:16 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 182 MB/230 MB
Mar 01 03:58:17 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 215 MB/230 MB
Mar 01 03:58:17 k8s-02m.mynodes.com kubelet-wrapper[1578]: Downloading ACI: 230 MB/230 MB
Mar 01 03:58:22 k8s-02m.mynodes.com kubelet-wrapper[1578]: image: signature verified:
Mar 01 03:58:22 k8s-02m.mynodes.com kubelet-wrapper[1578]: Quay.io ACI Converter (ACI conversion signing key) <support@quay.io>
Mar 01 03:58:57 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:58:57.990473 1578 feature_gate.go:189] feature gates: map[StreamingProxyRedirects:true ExperimentalHostUserNamespaceD
Mar 01 03:58:57 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:58:57.996550 1578 server.go:217] Starting Kubelet configuration sync loop
Mar 01 03:58:58 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:58:58.592639 1578 server.go:369] Successfully initialized cloud provider: "vsphere" from the config file: "/etc/vsphe
Mar 01 03:58:58 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:58:58.596033 1578 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
Mar 01 03:58:58 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:58:58.596098 1578 docker.go:376] Start docker client with request timeout=2m0s
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.862291 1578 server.go:511] cloud provider determined current node name to be k8s-02m.mynodes.com
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.862892 1578 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: W0301 03:59:00.868792 1578 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tc
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.872149 1578 fs.go:117] Filesystem partitions: map[/dev/sda9:{mountpoint:/var/lib/docker/overlay major:8 minor:9
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.878799 1578 manager.go:198] Machine: {NumCores:4 CpuFrequency:2799999 MemoryCapacity:8376045568 MachineID:b07a1
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.879439 1578 manager.go:204] Version: {KernelVersion:4.7.3-coreos-r3 ContainerOsVersion:Container Linux by CoreO
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.881742 1578 oom_linux.go:64] attempting to set "/proc/self/oom_score_adj" to "-999"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.888668 1578 server.go:511] cloud provider determined current node name to be k8s-02m.mynodes.com
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.888789 1578 server.go:666] Sending events to api server.
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.888886 1578 server.go:700] Using root directory: /var/lib/kubelet
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891288 1578 kubelet.go:307] cloud provider determined current node name to be k8s-02m.mynodes.com
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891353 1578 kubelet.go:242] Adding manifest file: /etc/kubernetes/manifests
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891402 1578 file.go:47] Watching path "/etc/kubernetes/manifests"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891415 1578 kubelet.go:252] Watching apiserver
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891454 1578 reflector.go:185] Starting reflector *api.Pod (0s) from pkg/kubelet/config/apiserver.go:44
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891547 1578 reflector.go:185] Starting reflector *api.Service (0s) from pkg/kubelet/kubelet.go:378
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.891599 1578 reflector.go:185] Starting reflector *api.Node (0s) from pkg/kubelet/kubelet.go:386
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.892779 1578 file.go:156] Reading config file "/etc/kubernetes/manifests/kube-apiserver.yaml"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.894692 1578 reflector.go:234] Listing and watching *api.Pod from pkg/kubelet/config/apiserver.go:44
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.894799 1578 reflector.go:234] Listing and watching *api.Service from pkg/kubelet/kubelet.go:378
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.895247 1578 reflector.go:234] Listing and watching *api.Node from pkg/kubelet/kubelet.go:386
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.895522 1578 file.go:156] Reading config file "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.897096 1578 file.go:156] Reading config file "/etc/kubernetes/manifests/kube-proxy.yaml"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.897920 1578 file.go:156] Reading config file "/etc/kubernetes/manifests/kube-scheduler.yaml"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898382 1578 config.go:281] Setting pods for source file
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898553 1578 config.go:397] Receiving a new pod "kube-controller-manager-k8s-02m.mynodes.com_kube-sy
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898576 1578 config.go:397] Receiving a new pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f30
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898588 1578 config.go:397] Receiving a new pod "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898602 1578 config.go:397] Receiving a new pod "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.898725 1578 config.go:281] Setting pods for source api
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899343 1578 config.go:397] Receiving a new pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899374 1578 config.go:397] Receiving a new pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c4
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899388 1578 config.go:397] Receiving a new pod "kube-controller-manager-k8s-02m.mynodes.com_kube-sy
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899399 1578 config.go:397] Receiving a new pod "kube-proxy-k8s-02m.mynodes.com_kube-system(4ca2eba6
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899410 1578 config.go:397] Receiving a new pod "kube-scheduler-k8s-02m.mynodes.com_kube-system(fc7d
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899421 1578 config.go:397] Receiving a new pod "redis-ztb0p_dev-a3-community(48d7880f-fe31-11e6-821f-0050569c5b
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.899431 1578 config.go:397] Receiving a new pod "kube-apiserver-k8s-02m.mynodes.com_kube-system(fd2d
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.901145 1578 kubelet.go:467] Experimental host user namespace defaulting is enabled.
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: W0301 03:59:00.901162 1578 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.901180 1578 kubelet.go:477] Hairpin mode set to "hairpin-veth"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.901278 1578 plugins.go:181] Loaded network plugin "cni"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903245 1578 docker_manager.go:256] Setting dockerRoot to /var/lib/docker
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903259 1578 docker_manager.go:259] Setting cgroupDriver to cgroupfs
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903371 1578 plugins.go:56] Registering credential provider: .dockercfg
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903549 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/aws-ebs"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903571 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/empty-dir"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903582 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/gce-pd"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903593 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/git-repo"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903604 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/host-path"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903615 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/nfs"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903626 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/secret"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903637 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/iscsi"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903649 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/glusterfs"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903660 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/rbd"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903671 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/cinder"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903681 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/quobyte"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903694 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/cephfs"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903706 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/downward-api"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903718 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/fc"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903746 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/flocker"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903759 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-file"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903772 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/configmap"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903784 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/vsphere-volume"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903797 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-disk"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.903808 1578 plugins.go:344] Loaded volume plugin "kubernetes.io/photon-pd"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.904414 1578 server.go:770] Started kubelet v1.5.3+coreos.0
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: E0301 03:59:00.904948 1578 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.906142 1578 container_manager_linux.go:335] Updating kernel flag: vm/overcommit_memory, expected value: 1, actu
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.906217 1578 container_manager_linux.go:335] Updating kernel flag: kernel/panic, expected value: 10, actual valu
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.906396 1578 server.go:123] Starting to listen on 0.0.0.0:10250
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.908808 1578 server.go:140] Starting to listen read-only on 0.0.0.0:10255
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.909550 1578 server.go:664] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-02m.dev.activenetwo
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.909575 1578 server.go:664] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-02m.dev.activenetwo
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910062 1578 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910086 1578 status_manager.go:129] Starting to sync pod status with apiserver
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910099 1578 kubelet.go:1714] Starting kubelet main sync loop.
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910118 1578 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910224 1578 kubelet.go:1138] Container garbage collection succeeded
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910278 1578 oom_linux.go:64] attempting to set "/proc/1670/oom_score_adj" to "-999"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910281 1578 volume_manager.go:240] The desired_state_of_world populator starts
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910300 1578 volume_manager.go:242] Starting Kubelet Volume Manager
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: E0301 03:59:00.910320 1578 container_manager_linux.go:625] error opening pid file /run/docker/libcontainerd/docker-containerd.
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.910634 1578 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.915584 1578 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x000080
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.917839 1578 factory.go:295] Registering Docker factory
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: W0301 03:59:00.917867 1578 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt ap
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.917876 1578 factory.go:54] Registering systemd factory
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.918077 1578 factory.go:86] Registering Raw factory
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.918287 1578 manager.go:1106] Started watching for new ooms in manager
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920462 1578 oomparser.go:185] oomparser using systemd
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920539 1578 factory.go:104] Error trying to work out if we can handle /: invalid container name
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920555 1578 factory.go:115] Factory "docker" was unable to handle container "/"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920573 1578 factory.go:104] Error trying to work out if we can handle /: / not handled by systemd handler
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920580 1578 factory.go:115] Factory "systemd" was unable to handle container "/"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.920593 1578 factory.go:111] Using factory "raw" for container "/"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.921316 1578 manager.go:898] Added container: "/" (aliases: [], namespace: "")
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.921602 1578 handler.go:325] Added event &{/ 2017-03-01 03:57:18.633 +0000 UTC containerCreation {<nil>}}
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.921639 1578 manager.go:288] Starting recovery of all containers
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.924927 1578 container.go:407] Start housekeeping for container "/"
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.926119 1578 iptables.go:362] running iptables -A [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x000080
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.928589 1578 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.930673 1578 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firew
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.933369 1578 iptables.go:362] running iptables -A [KUBE-FIREWALL -t filter -m comment --comment kubernetes firew
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.947340 1578 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.949170 1578 iptables.go:362] running iptables -I [OUTPUT -t filter -j KUBE-FIREWALL]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.958538 1578 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.960603 1578 iptables.go:362] running iptables -I [INPUT -t filter -j KUBE-FIREWALL]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.970278 1578 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.972196 1578 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.974131 1578 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x000040
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.977337 1578 iptables.go:362] running iptables -A [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x000040
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.979183 1578 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postroutin
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.980950 1578 iptables.go:362] running iptables -I [POSTROUTING -t nat -m comment --comment kubernetes postroutin
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.983144 1578 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes servi
Mar 01 03:59:00 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:00.985141 1578 iptables.go:362] running iptables -A [KUBE-POSTROUTING -t nat -m comment --comment kubernetes servi
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.010543 1578 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018135 1578 factory.go:104] Error trying to work out if we can handle /system.slice: invalid container name
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018159 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018173 1578 factory.go:104] Error trying to work out if we can handle /system.slice: /system.slice not handled
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018180 1578 factory.go:115] Factory "systemd" was unable to handle container "/system.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018188 1578 factory.go:111] Using factory "raw" for container "/system.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018445 1578 manager.go:898] Added container: "/system.slice" (aliases: [], namespace: "")
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018647 1578 handler.go:325] Added event &{/system.slice 2017-03-01 03:59:00.92197173 +0000 UTC containerCreatio
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018713 1578 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-0a4c95
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018740 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018758 1578 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-0a4c953b
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018776 1578 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-0a4c953b\\x2dd385\\x2d4e00\\
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018790 1578 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018797 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018804 1578 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018813 1578 manager.go:867] ignoring container "/system.slice/-.mount"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018805 1578 container.go:407] Start housekeeping for container "/system.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018823 1578 factory.go:104] Error trying to work out if we can handle /system.slice/dbus.service: invalid conta
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018836 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dbus.service"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018844 1578 factory.go:104] Error trying to work out if we can handle /system.slice/dbus.service: /system.slice
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018851 1578 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/dbus.service"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.018858 1578 factory.go:111] Using factory "raw" for container "/system.slice/dbus.service"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019171 1578 manager.go:898] Added container: "/system.slice/dbus.service" (aliases: [], namespace: "")
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019393 1578 handler.go:325] Added event &{/system.slice/dbus.service 2017-03-01 03:59:00.922971736 +0000 UTC co
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019430 1578 factory.go:104] Error trying to work out if we can handle /system.slice/system-addon\x2dconfig.slic
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019439 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice/system-addon\\x2dcon
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019448 1578 factory.go:104] Error trying to work out if we can handle /system.slice/system-addon\x2dconfig.slic
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019455 1578 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/system-addon\\x2dco
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019463 1578 factory.go:111] Using factory "raw" for container "/system.slice/system-addon\\x2dconfig.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019506 1578 container.go:407] Start housekeeping for container "/system.slice/dbus.service"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019744 1578 manager.go:898] Added container: "/system.slice/system-addon\\x2dconfig.slice" (aliases: [], namesp
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.019989 1578 handler.go:325] Added event &{/system.slice/system-addon\x2dconfig.slice 2017-03-01 03:59:00.928971
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.020022 1578 factory.go:104] Error trying to work out if we can handle /system.slice/system-getty.slice: invalid
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.020030 1578 factory.go:115] Factory "docker" was unable to handle container "/system.slice/system-getty.slice"
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.020042 1578 factory.go:104] Error trying to work out if we can handle /system.slice/system-getty.slice: /system
Mar 01 03:59:01 k8s-02m.mynodes.com kubelet-wrapper[1578]: I0301 03:59:01.020049 1578 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/system-getty.slice"

Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211735 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-mqueue.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211747 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-mqueue.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211761 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-mqueue.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211774 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-mqueue.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211808 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211815 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211827 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211903 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211954 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-kubelet-pods-bff412e3\x2dfef7\x2d11e6\x2d9da9\x2d0050569c0abc-volumes-kubernetes.io\x7esecret-default\x2dtoken\x2dllbgd.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211962 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211976 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.211990 1624 manager.go:867] ignoring container "/system.slice/var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212006 1624 factory.go:104] Error trying to work out if we can handle /system.slice/boot.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212019 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/boot.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212034 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212054 1624 manager.go:867] ignoring container "/system.slice/boot.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212512 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-overlay-5952921d52b70300cc8a364d31113b1b2f57642050c6ec3c30f9c6ecef7c2648-merged.mount: error inspecting container: Error: No such container: 5952921d52b70300cc8a364d31113b1b2f57642050c6ec3c30f9c6ecef7c2648
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212526 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-overlay-5952921d52b70300cc8a364d31113b1b2f57642050c6ec3c30f9c6ecef7c2648-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212539 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-overlay-5952921d52b70300cc8a364d31113b1b2f57642050c6ec3c30f9c6ecef7c2648-merged.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212552 1624 manager.go:867] ignoring container "/system.slice/var-lib-docker-overlay-5952921d52b70300cc8a364d31113b1b2f57642050c6ec3c30f9c6ecef7c2648-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212592 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpu\x2ccpuacct.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212601 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212614 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212631 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212667 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpuset.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212675 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpuset.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212687 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpuset.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.212701 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-cpuset.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213141 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-overlay-b730c60f5a4902be60294049945038d5d82a93cecef231bdc9cdc7483173d2bf-merged.mount: error inspecting container: Error: No such container: b730c60f5a4902be60294049945038d5d82a93cecef231bdc9cdc7483173d2bf
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213154 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-overlay-b730c60f5a4902be60294049945038d5d82a93cecef231bdc9cdc7483173d2bf-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213175 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-overlay-b730c60f5a4902be60294049945038d5d82a93cecef231bdc9cdc7483173d2bf-merged.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213200 1624 manager.go:867] ignoring container "/system.slice/var-lib-docker-overlay-b730c60f5a4902be60294049945038d5d82a93cecef231bdc9cdc7483173d2bf-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213241 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-memory.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213253 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-memory.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213267 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-memory.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213280 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-cgroup-memory.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213697 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-overlay-fa54dc44af0667c2bf27e98eb8d2dd7e2f83d717476e0141dcb5aaa359551d80-merged.mount: error inspecting container: Error: No such container: fa54dc44af0667c2bf27e98eb8d2dd7e2f83d717476e0141dcb5aaa359551d80
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213713 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-overlay-fa54dc44af0667c2bf27e98eb8d2dd7e2f83d717476e0141dcb5aaa359551d80-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213726 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-overlay-fa54dc44af0667c2bf27e98eb8d2dd7e2f83d717476e0141dcb5aaa359551d80-merged.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213739 1624 manager.go:867] ignoring container "/system.slice/var-lib-docker-overlay-fa54dc44af0667c2bf27e98eb8d2dd7e2f83d717476e0141dcb5aaa359551d80-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213774 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213782 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213794 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213806 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213835 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213842 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213853 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213893 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213942 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-etc-resolv.conf.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213952 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-etc-resolv.conf.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213965 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-etc-resolv.conf.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.213979 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-etc-resolv.conf.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214016 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-kernel-security.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214026 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-kernel-security.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214048 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-kernel-security.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214076 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-kernel-security.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214117 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214124 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214137 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214151 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214170 1624 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-829799c8a5f8.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214177 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-829799c8a5f8.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214186 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-829799c8a5f8.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214196 1624 manager.go:867] ignoring container "/system.slice/run-docker-netns-829799c8a5f8.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214650 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-overlay-5768503ccba601730ee4680e88fdad402b28cce6745fc25fabbbaab6d573b267-merged.mount: error inspecting container: Error: No such container: 5768503ccba601730ee4680e88fdad402b28cce6745fc25fabbbaab6d573b267
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214668 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-overlay-5768503ccba601730ee4680e88fdad402b28cce6745fc25fabbbaab6d573b267-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214689 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-overlay-5768503ccba601730ee4680e88fdad402b28cce6745fc25fabbbaab6d573b267-merged.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214703 1624 manager.go:867] ignoring container "/system.slice/var-lib-docker-overlay-5768503ccba601730ee4680e88fdad402b28cce6745fc25fabbbaab6d573b267-merged.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214763 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-bff412e3\x2dfef7\x2d11e6\x2d9da9\x2d0050569c0abc-volumes-kubernetes.io\x7esecret-default\x2dtoken\x2dllbgd.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214774 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214789 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214807 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-bff412e3\\x2dfef7\\x2d11e6\\x2d9da9\\x2d0050569c0abc-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2dllbgd.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214845 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-pts.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214853 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-pts.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214899 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-pts.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214914 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-dev-pts.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214950 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-pstore.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214958 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-pstore.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214970 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-pstore.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.214984 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-sys-fs-pstore.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215022 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-kubelet-pods-75640253\x2dfcf4\x2d11e6\x2d824e\x2d0050569c46ca-volumes-kubernetes.io\x7esecret-default\x2dtoken\x2djb4h6.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215029 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215042 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215055 1624 manager.go:867] ignoring container "/system.slice/var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215114 1624 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-rkt-pods-run-1903efd4\x2def36\x2d4781\x2d9c0a\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-75640253\x2dfcf4\x2d11e6\x2d824e\x2d0050569c46ca-volumes-kubernetes.io\x7esecret-default\x2dtoken\x2djb4h6.mount: invalid container name
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215122 1624 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215137 1624 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount", but ignoring.
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215154 1624 manager.go:867] ignoring container "/system.slice/var-lib-rkt-pods-run-1903efd4\\x2def36\\x2d4781\\x2d9c0a\\x2d57e6c93ce27e-stage1-rootfs-opt-stage2-hyperkube-rootfs-var-lib-kubelet-pods-75640253\\x2dfcf4\\x2d11e6\\x2d824e\\x2d0050569c46ca-volumes-kubernetes.io\\x7esecret-default\\x2dtoken\\x2djb4h6.mount"
Mar 02 03:25:44 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:44.215170 1624 manager.go:349] Global Housekeeping(1488425144) took 115.811303ms
Mar 02 03:25:45 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:45.666949 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:47 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:47.666965 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.668447 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.668495 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.668654 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.668677 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.669152 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.669171 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.669986 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:25:48 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc42254e6c0 2 [] true false map[] 0xc420f9a3c0 <nil>}
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.670043 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.671229 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:25:48 GMT]] 0xc42254e960 2 [] true false map[] 0xc420fde3c0 <nil>}
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.671261 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.671502 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:25:48 GMT] Content-Length:[2]] 0xc42254eaa0 2 [] true false map[] 0xc420f9a0f0 <nil>}
Mar 02 03:25:48 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:48.671537 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
Mar 02 03:25:49 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:49.666959 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.186270 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.186965 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.238914 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.238980 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239022 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239036 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239047 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239078 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239090 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.239129 1624 eviction_manager.go:272] eviction manager: no resources are starved
Mar 02 03:25:51 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:51.667065 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667021 1624 kubelet.go:1835] SyncLoop (SYNC): 2 pods; kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c), kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667161 1624 kubelet_pods.go:1029] Generating status for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667214 1624 kubelet_pods.go:1029] Generating status for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667387 1624 status_manager.go:312] Ignoring same status for pod "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 03:12:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:48 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.119.55.171 PodIP:10.119.55.171 StartTime:2017-03-01 03:12:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc4225e0280 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc4218cb880} Ready:true RestartCount:3 Image:artifacts.mynodes.com:8080/coreos/hyperkube:v1.5.3_coreos.0 ImageID:docker-pullable://artifacts.mynodes.com:8080/coreos/hyperkube@sha256:60fa8c3f06d0a47bb1be8c20ec6c147e973326ee2f0f37b98aaea7e46d9055df ContainerID:docker://6a4c63fb1059da745644b01768a6810322123042204d89767befc51ed8b7a43f}]}
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667562 1624 docker_manager.go:1961] Found pod infra container for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.667839 1624 status_manager.go:312] Ignoring same status for pod "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 03:12:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:48 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.119.55.171 PodIP:10.119.55.171 StartTime:2017-03-01 03:12:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc4225e0ba0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc4218cbb20} Ready:true RestartCount:5 Image:artifacts.mynodes.com:8080/coreos/hyperkube:v1.5.3_coreos.0 ImageID:docker-pullable://artifacts.mynodes.com:8080/coreos/hyperkube@sha256:60fa8c3f06d0a47bb1be8c20ec6c147e973326ee2f0f37b98aaea7e46d9055df ContainerID:docker://7e24bdec8845597239d3288ef1ed6250997d884a58cc8a07ff8159263a00a03e}]}
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.668028 1624 volume_manager.go:338] Waiting for volumes to attach and mount for pod "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.669558 1624 docker_manager.go:1974] Pod infra container looks good, keep it "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.669609 1624 docker_manager.go:2022] pod "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)" container "kube-scheduler" exists as 6a4c63fb1059da745644b01768a6810322123042204d89767befc51ed8b7a43f
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.669720 1624 docker_manager.go:2109] Got container changes for pod "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6)": {StartInfraContainer:false InfraChanged:false InfraContainerId:6b446649d108273bee480ca00754a70bcb10ae87e0abfa1e52a7e005f15c1767 InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[] ContainersToKeep:map[6b446649d108273bee480ca00754a70bcb10ae87e0abfa1e52a7e005f15c1767:-1 6a4c63fb1059da745644b01768a6810322123042204d89767befc51ed8b7a43f:0]}
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.968257 1624 volume_manager.go:367] All volumes are attached and mounted for pod "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.968318 1624 docker_manager.go:1961] Found pod infra container for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.970277 1624 docker_manager.go:1974] Pod infra container looks good, keep it "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)"
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.970320 1624 docker_manager.go:2022] pod "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)" container "kube-controller-manager" exists as 7e24bdec8845597239d3288ef1ed6250997d884a58cc8a07ff8159263a00a03e
Mar 02 03:25:52 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:52.970530 1624 docker_manager.go:2109] Got container changes for pod "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c)": {StartInfraContainer:false InfraChanged:false InfraContainerId:4ea23fd63707be10e2fa8fa603ac8d31d72234e041abd614544050e7fc9af914 InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[] ContainersToKeep:map[4ea23fd63707be10e2fa8fa603ac8d31d72234e041abd614544050e7fc9af914:-1 7e24bdec8845597239d3288ef1ed6250997d884a58cc8a07ff8159263a00a03e:0]}
Mar 02 03:25:53 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:53.667057 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:55 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:55.667230 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.667013 1624 kubelet.go:1835] SyncLoop (SYNC): 1 pods; calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.667260 1624 kubelet_pods.go:1029] Generating status for "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)"
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.667844 1624 status_manager.go:312] Ignoring same status for pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-27 13:55:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:27:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-27 13:56:01 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.119.55.171 PodIP:10.119.55.171 StartTime:2017-02-27 13:55:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:calico-node State:{Waiting:<nil> Running:0xc422551940 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc421727e30} Ready:true RestartCount:2 Image:artifacts.mynodes.com:8080/calico/node:v1.0.2 ImageID:docker-pullable://artifacts.mynodes.com:8080/calico/node@sha256:5e747757b5c9eff5db7e7f1420f2b437f97f47d1301a15ad787070dd029e2b5c ContainerID:docker://4217e797dbb97805befd0da445c04d740d95da216ec9a62aa343727786e24e51} {Name:install-cni State:{Waiting:<nil> Running:0xc422551960 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc421727f10} Ready:true RestartCount:2 Image:artifacts.mynodes.com:8080/calico/cni:v1.5.6 ImageID:docker-pullable://artifacts.mynodes.com:8080/calico/cni@sha256:d0e07c19aac1cc84278decd7e24e8cc1842bfd2332037c652c81f525d555052d ContainerID:docker://4882d40d5102667aed95b0140328481ca6941d288bcd471e2f6f8c56d012dc0d}]}
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.668121 1624 volume_manager.go:338] Waiting for volumes to attach and mount for pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)"
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.712283 1624 secret.go:179] Setting up volume default-token-jb4h6 for pod 75640253-fcf4-11e6-824e-0050569c46ca at /var/lib/kubelet/pods/75640253-fcf4-11e6-824e-0050569c46ca/volumes/kubernetes.io~secret/default-token-jb4h6
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.716647 1624 secret.go:206] Received secret kube-system/default-token-jb4h6 containing (3) pieces of data, 2002 total bytes
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.717236 1624 atomic_writer.go:142] pod kube-system/calico-node-rg69b volume default-token-jb4h6: no update required for target directory /var/lib/kubelet/pods/75640253-fcf4-11e6-824e-0050569c46ca/volumes/kubernetes.io~secret/default-token-jb4h6
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.717268 1624 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/75640253-fcf4-11e6-824e-0050569c46ca-default-token-jb4h6" (spec.Name: "default-token-jb4h6") pod "75640253-fcf4-11e6-824e-0050569c46ca" (UID: "75640253-fcf4-11e6-824e-0050569c46ca").
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.968506 1624 volume_manager.go:367] All volumes are attached and mounted for pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)"
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.969634 1624 docker_manager.go:1961] Found pod infra container for "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)"
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.971610 1624 docker_manager.go:1974] Pod infra container looks good, keep it "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)"
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.972161 1624 docker_manager.go:2022] pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)" container "calico-node" exists as 4217e797dbb97805befd0da445c04d740d95da216ec9a62aa343727786e24e51
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.972839 1624 docker_manager.go:2022] pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)" container "install-cni" exists as 4882d40d5102667aed95b0140328481ca6941d288bcd471e2f6f8c56d012dc0d
Mar 02 03:25:56 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:56.973511 1624 docker_manager.go:2109] Got container changes for pod "calico-node-rg69b_kube-system(75640253-fcf4-11e6-824e-0050569c46ca)": {StartInfraContainer:false InfraChanged:false InfraContainerId:8f19f9ff8ee4dc6d31ed1526379ed622821194c59411873b468790088e86fb04 InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[] ContainersToKeep:map[8f19f9ff8ee4dc6d31ed1526379ed622821194c59411873b468790088e86fb04:-1 4217e797dbb97805befd0da445c04d740d95da216ec9a62aa343727786e24e51:0 4882d40d5102667aed95b0140328481ca6941d288bcd471e2f6f8c56d012dc0d:1]}
Mar 02 03:25:57 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:57.667040 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.668405 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.668473 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.668768 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.668828 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.669222 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.669247 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.669977 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:25:58 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4220774e0 2 [] true false map[] 0xc420f9b0e0 <nil>}
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.670049 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.670215 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:25:58 GMT] Content-Length:[2]] 0xc4213c4c00 2 [] true false map[] 0xc42123e870 <nil>}
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.670285 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.670911 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:25:58 GMT]] 0xc4213c4d40 2 [] true false map[] 0xc4211b80f0 <nil>}
Mar 02 03:25:58 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:58.670970 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
Mar 02 03:25:59 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:25:59.667076 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.265012 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.265061 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.328974 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329057 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329084 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329138 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329173 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329204 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329221 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.329273 1624 eviction_manager.go:272] eviction manager: no resources are starved
Mar 02 03:26:01 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:01.667158 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:03 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:03.666953 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:05 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:05.294239 1624 server.go:220] Checking API server for new Kubelet configuration.
Mar 02 03:26:05 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:05.298370 1624 server.go:229] Did not find a configuration for this Kubelet via API server: cloud provider was nil, and attempt to use hostname to find config resulted in: configmaps "kubelet-k8s-02m.mynodes.com" not found
Mar 02 03:26:05 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:05.666958 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:07 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:07.666981 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.668384 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.669365 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.669420 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.670111 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.669420 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.670227 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.671631 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:26:08 GMT]] 0xc422017920 2 [] true false map[] 0xc42123e0f0 <nil>}
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.671685 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.671667 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:08 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc420dcf960 2 [] true false map[] 0xc420f9a2d0 <nil>}
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.671729 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.672938 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:26:08 GMT] Content-Length:[2]] 0xc4227a6be0 2 [] true false map[] 0xc4200e2690 <nil>}
Mar 02 03:26:08 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:08.673037 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
Mar 02 03:26:09 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:09.667047 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.349238 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.350005 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.395542 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.396292 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.396847 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.397354 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.397891 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.398375 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.398847 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.399360 1624 eviction_manager.go:272] eviction manager: no resources are starved
Mar 02 03:26:11 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:11.667040 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:13 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:13.667209 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:14 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:14.108167 1624 kubelet.go:1138] Container garbage collection succeeded
Mar 02 03:26:15 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:15.667130 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:17 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:17.667014 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.668567 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.668640 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.668783 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.668802 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.669292 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.669320 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.670943 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:18 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421b9c560 2 [] true false map[] 0xc42123ea50 <nil>}
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.671001 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.671293 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:18 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421b9c6e0 2 [] true false map[] 0xc4211b84b0 <nil>}
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.671342 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.671603 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:18 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4213845a0 2 [] true false map[] 0xc420fde5a0 <nil>}
Mar 02 03:26:18 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:18.671640 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:26:19 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:19.667003 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.418425 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.419261 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.482983 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483042 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483073 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483086 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483115 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483132 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483157 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.483195 1624 eviction_manager.go:272] eviction manager: no resources are starved
Mar 02 03:26:21 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:21.667019 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.103240 1624 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.108038 1624 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.112801 1624 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.116589 1624 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.122439 1624 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.126520 1624 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.131912 1624 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.135363 1624 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.138970 1624 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.143952 1624 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.147491 1624 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
Mar 02 03:26:23 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:23.667036 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:25 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:25.666992 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.667069 1624 kubelet.go:1835] SyncLoop (SYNC): 1 pods; traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.668601 1624 kubelet_pods.go:1029] Generating status for "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)"
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.669185 1624 status_manager.go:312] Ignoring same status for pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-27 13:56:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-02-27 13:56:22 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.119.55.171 PodIP:10.119.55.171 StartTime:2017-02-27 13:56:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:traefik-ingress-lb State:{Waiting:<nil> Running:0xc420f0ac40 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc4218cbd50} Ready:true RestartCount:2 Image:artifacts.mynodes.com:8080/traefik:v1.1.2 ImageID:docker-pullable://artifacts.mynodes.com:8080/traefik@sha256:c81e1a321ec90a41987b1c8cb887606e3e632acadb3b29f5b97f0ad03c59ffd9 ContainerID:docker://85d31336c06e866230b7f5e3d1c11127028187125daa356034fd29e74674ee45}]}
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.669522 1624 volume_manager.go:338] Waiting for volumes to attach and mount for pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)"
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.733417 1624 secret.go:179] Setting up volume default-token-jb4h6 for pod 84a7b4c6-fcf4-11e6-b70e-0050569c24f4 at /var/lib/kubelet/pods/84a7b4c6-fcf4-11e6-b70e-0050569c24f4/volumes/kubernetes.io~secret/default-token-jb4h6
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.738602 1624 secret.go:206] Received secret kube-system/default-token-jb4h6 containing (3) pieces of data, 2002 total bytes
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.738892 1624 atomic_writer.go:142] pod kube-system/traefik-ingress-controller-1ws94 volume default-token-jb4h6: no update required for target directory /var/lib/kubelet/pods/84a7b4c6-fcf4-11e6-b70e-0050569c24f4/volumes/kubernetes.io~secret/default-token-jb4h6
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.738922 1624 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/84a7b4c6-fcf4-11e6-b70e-0050569c24f4-default-token-jb4h6" (spec.Name: "default-token-jb4h6") pod "84a7b4c6-fcf4-11e6-b70e-0050569c24f4" (UID: "84a7b4c6-fcf4-11e6-b70e-0050569c24f4").
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.970114 1624 volume_manager.go:367] All volumes are attached and mounted for pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)"
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.970248 1624 docker_manager.go:1961] Found pod infra container for "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)"
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.973563 1624 docker_manager.go:1974] Pod infra container looks good, keep it "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)"
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.973783 1624 docker_manager.go:2022] pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)" container "traefik-ingress-lb" exists as 85d31336c06e866230b7f5e3d1c11127028187125daa356034fd29e74674ee45
Mar 02 03:26:26 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:26.973987 1624 docker_manager.go:2109] Got container changes for pod "traefik-ingress-controller-1ws94_kube-system(84a7b4c6-fcf4-11e6-b70e-0050569c24f4)": {StartInfraContainer:false InfraChanged:false InfraContainerId:adacf1733b0be21175fe31058039e548a1260b75a2572106844ecad849e8472a InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[] ContainersToKeep:map[adacf1733b0be21175fe31058039e548a1260b75a2572106844ecad849e8472a:-1 85d31336c06e866230b7f5e3d1c11127028187125daa356034fd29e74674ee45:0]}
Mar 02 03:26:27 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:27.667080 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.668450 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.668552 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.668762 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.668789 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.669298 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.669367 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.670025 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:28 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421382140 2 [] true false map[] 0xc420fd61e0 <nil>}
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.670137 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.671455 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:28 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421eba7c0 2 [] true false map[] 0xc420fd63c0 <nil>}
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.671533 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.671733 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:28 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc421ebab00 2 [] true false map[] 0xc4211b80f0 <nil>}
Mar 02 03:26:28 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:28.671798 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
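Note: the prober.go/http.go lines above are the kubelet's HTTP liveness probes for the static control-plane pods: a plain GET against 127.0.0.1 on /healthz, port 8080 (kube-apiserver's insecure port), 10251 (kube-scheduler), and 10252 (kube-controller-manager). A minimal sketch reproducing the same check follows; the ports and path come from this log, everything else (timeout, success criteria of a 2xx/3xx status) is an assumption about the kubelet's httprobe behavior, not taken from the log.

```go
// probe_healthz.go - minimal sketch of the HTTP liveness check the kubelet
// logs above: GET http://127.0.0.1:<port>/healthz, success assumed for any
// status in [200, 400). Ports/path are from the log; the rest is assumed.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(port int) error {
	client := &http.Client{Timeout: 10 * time.Second} // assumed per-probe timeout
	url := fmt.Sprintf("http://127.0.0.1:%d/healthz", port)
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil // treat 2xx/3xx as probe success
	}
	return fmt.Errorf("probe failed for %s: %s", url, resp.Status)
}

func main() {
	// 8080 = kube-apiserver, 10251 = kube-scheduler,
	// 10252 = kube-controller-manager, per the probe lines above.
	for _, port := range []int{8080, 10251, 10252} {
		if err := probe(port); err != nil {
			fmt.Println(err)
		} else {
			fmt.Printf("probe succeeded for port %d\n", port)
		}
	}
}
```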
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.667091 1624 kubelet.go:1835] SyncLoop (SYNC): 1 pods; kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.667184 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.667506 1624 kubelet_pods.go:1029] Generating status for "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)"
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.667747 1624 status_manager.go:312] Ignoring same status for pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 03:29:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-01 18:26:48 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.119.55.171 PodIP:10.119.55.171 StartTime:2017-03-01 03:29:38 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:<nil> Running:0xc421484d00 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc420e86460} Ready:true RestartCount:2 Image:artifacts.mynodes.com:8080/coreos/hyperkube:v1.5.3_coreos.0 ImageID:docker-pullable://artifacts.mynodes.com:8080/coreos/hyperkube@sha256:60fa8c3f06d0a47bb1be8c20ec6c147e973326ee2f0f37b98aaea7e46d9055df ContainerID:docker://f297570afa94790fcaf970dd4b847020e5f487e465bb4b221db833187ea68850}]}
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.667928 1624 volume_manager.go:338] Waiting for volumes to attach and mount for pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)"
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.968207 1624 volume_manager.go:367] All volumes are attached and mounted for pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)"
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.968268 1624 docker_manager.go:1961] Found pod infra container for "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)"
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.972315 1624 docker_manager.go:1974] Pod infra container looks good, keep it "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)"
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.972424 1624 docker_manager.go:2022] pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)" container "kube-proxy" exists as f297570afa94790fcaf970dd4b847020e5f487e465bb4b221db833187ea68850
Mar 02 03:26:29 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:29.972615 1624 docker_manager.go:2109] Got container changes for pod "kube-proxy-k8s-02m.mynodes.com_kube-system(465d9f3027710c37c6a92a2ac6a0cd59)": {StartInfraContainer:false InfraChanged:false InfraContainerId:5fbabffecdba963826fea86c2c66da03040a80dea7678022429ebd6be9862771 InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[] ContainersToKeep:map[5fbabffecdba963826fea86c2c66da03040a80dea7678022429ebd6be9862771:-1 f297570afa94790fcaf970dd4b847020e5f487e465bb4b221db833187ea68850:0]}
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.526339 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.526398 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.558759 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.558921 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.558954 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.558968 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.558986 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.559024 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.559040 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.559112 1624 eviction_manager.go:272] eviction manager: no resources are starved
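Note: the repeated `Missing default interface "eth0"` lines come from the kubelet's summary stats endpoint, which looks for an interface literally named eth0; when none exists, per-pod network stats are skipped but nothing else breaks. On a vSphere VM running Calico the primary NIC is likely named something else (e.g. ens192, with cali* veths for pods; those names are an assumption, not in this log). A quick diagnostic sketch to confirm what the node actually has:

```go
// list_ifaces.go - lists the node's network interfaces, to confirm why
// summary.go can't find one named "eth0" on this host. Purely diagnostic;
// interface names like ens192/cali* are assumptions, not from the log.
package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs() // ignore per-interface address errors
		fmt.Printf("%-12s up=%v addrs=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0, addrs)
	}
}
```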
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.667014 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: E0302 03:26:31.682530 1624 kubelet.go:1522] Unable to mount volumes for pod "busybox-fast-pvc_default(bff412e3-fef7-11e6-9da9-0050569c0abc)": timeout expired waiting for volumes to attach/mount for pod "default"/"busybox-fast-pvc". list of unattached/unmounted volumes=[vmdk-vol]; skipping pod
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: E0302 03:26:31.682664 1624 pod_workers.go:184] Error syncing pod bff412e3-fef7-11e6-9da9-0050569c0abc, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"busybox-fast-pvc". list of unattached/unmounted volumes=[vmdk-vol]
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.682841 1624 server.go:664] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"busybox-fast-pvc", UID:"bff412e3-fef7-11e6-9da9-0050569c0abc", APIVersion:"v1", ResourceVersion:"1981728", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Unable to mount volumes for pod "busybox-fast-pvc_default(bff412e3-fef7-11e6-9da9-0050569c0abc)": timeout expired waiting for volumes to attach/mount for pod "default"/"busybox-fast-pvc". list of unattached/unmounted volumes=[vmdk-vol]
Mar 02 03:26:31 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:31.682925 1624 server.go:664] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"busybox-fast-pvc", UID:"bff412e3-fef7-11e6-9da9-0050569c0abc", APIVersion:"v1", ResourceVersion:"1981728", FieldPath:""}): type: 'Warning' reason: 'FailedSync' Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"busybox-fast-pvc". list of unattached/unmounted volumes=[vmdk-vol]
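Note: these two Warning events (FailedMount, FailedSync) are the actionable part of this log: the vSphere volume `vmdk-vol` for pod `default/busybox-fast-pvc` never attached/mounted on this node before the kubelet's wait timeout expired. The same records surface via `kubectl describe pod busybox-fast-pvc`; a sketch pulling them programmatically follows (hedged: this uses a current client-go API, which differs from the 1.5-era client that produced this log; the pod name and namespace are from the events above).

```go
// get_pod_events.go - sketch listing the events recorded for the failing pod,
// i.e. the same FailedMount/FailedSync records emitted above. Uses a modern
// client-go API (an assumption relative to this 1.5-era cluster).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; adjust for your cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Field selector narrows to events about the pod named in the log.
	events, err := clientset.CoreV1().Events("default").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "involvedObject.name=busybox-fast-pvc"})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
```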
Mar 02 03:26:33 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:33.666956 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:35 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:35.294308 1624 server.go:220] Checking API server for new Kubelet configuration.
Mar 02 03:26:35 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:35.297592 1624 server.go:229] Did not find a configuration for this Kubelet via API server: cloud provider was nil, and attempt to use hostname to find config resulted in: configmaps "kubelet-k8s-02m.mynodes.com" not found
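Note: the two server.go lines above are the alpha dynamic-kubelet-configuration check: with no cloud provider configured, the kubelet falls back to its hostname and looks for a per-node ConfigMap named `kubelet-<hostname>` (in the kube-system namespace, if memory of the 1.5-era feature serves; that namespace is an assumption, not stated in the log). "Not found" here is harmless unless you actually rely on dynamic config. A sketch to check for that ConfigMap:

```go
// check_kubelet_configmap.go - sketch: verify whether the per-node kubelet
// ConfigMap the log line looks for exists. The "kube-system" namespace and
// the "kubelet-<hostname>" naming are assumptions based on the 1.5-era
// dynamic kubelet config feature; the ConfigMap name format matches the log.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	hostname, _ := os.Hostname()
	name := "kubelet-" + hostname // e.g. kubelet-k8s-02m.mynodes.com, as in the log
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		fmt.Printf("no dynamic kubelet config: %v\n", err) // matches the "not found" line above
		return
	}
	fmt.Printf("found %s with %d keys\n", cm.Name, len(cm.Data))
}
```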
Mar 02 03:26:35 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:35.666970 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:37 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:37.666951 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.668483 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 8080, Path: /healthz
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.668567 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.668686 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10252, Path: /healthz
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.668714 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.669534 1624 http.go:82] Probe succeeded for http://127.0.0.1:10252/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:38 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4221b1880 2 [] true false map[] 0xc4211b81e0 <nil>}
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.669583 1624 prober.go:113] Liveness probe for "kube-controller-manager-k8s-02m.mynodes.com_kube-system(426568176546749e80563ce32ceaf77c):kube-controller-manager" succeeded
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.669663 1624 prober.go:159] HTTP-Probe Host: http://127.0.0.1, Port: 10251, Path: /healthz
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.669707 1624 prober.go:162] HTTP-Probe Headers: map[]
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.670604 1624 http.go:82] Probe succeeded for http://127.0.0.1:10251/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 02 Mar 2017 03:26:38 GMT]] 0xc4221b19c0 2 [] true false map[] 0xc42123e690 <nil>}
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.670645 1624 prober.go:113] Liveness probe for "kube-scheduler-k8s-02m.mynodes.com_kube-system(35b2d362a7c0fc07449c0f9ca6ec01f6):kube-scheduler" succeeded
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.671118 1624 http.go:82] Probe succeeded for http://127.0.0.1:8080/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Thu, 02 Mar 2017 03:26:38 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4221b1a80 2 [] true false map[] 0xc420fde690 <nil>}
Mar 02 03:26:38 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:38.671155 1624 prober.go:113] Liveness probe for "kube-apiserver-k8s-02m.mynodes.com_kube-system(124b9d46373e8402da7f79f958c305b8):kube-apiserver" succeeded
Mar 02 03:26:39 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:39.666966 1624 kubelet.go:1858] SyncLoop (housekeeping)
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.564310 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.564373 1624 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638137 1624 summary.go:383] Missing default interface "eth0" for node:k8s-02m.mynodes.com
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638192 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_calico-node-rg69b
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638208 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-proxy-k8s-02m.mynodes.com
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638226 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-apiserver-k8s-02m.mynodes.com
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638242 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-scheduler-k8s-02m.mynodes.com
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638262 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_kube-controller-manager-k8s-02m.mynodes.com
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638274 1624 summary.go:383] Missing default interface "eth0" for pod:kube-system_traefik-ingress-controller-1ws94
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.638331 1624 eviction_manager.go:272] eviction manager: no resources are starved
Mar 02 03:26:41 k8s-02m.mynodes.com kubelet-wrapper[1624]: I0302 03:26:41.667041 1624 kubelet.go:1858] SyncLoop (housekeeping)