@enisoc
Created December 2, 2016 20:34
kubernetes-minion-group-c4x6 kubelet.log
I1202 19:53:55.552706 2806 plugins.go:71] No cloud provider specified.
I1202 19:53:55.554357 2806 manager.go:133] cAdvisor running in container: "/"
W1202 19:53:55.592777 2806 manager.go:141] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:53:55.593101 2806 fs.go:116] Filesystem partitions: map[/dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
I1202 19:53:55.599319 2806 machine.go:50] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1202 19:53:55.599370 2806 manager.go:182] Machine: {NumCores:4 CpuFrequency:2300000 MemoryCapacity:15807909888 MachineID: SystemUUID:E7B27D0E-145A-CBF2-A547-28C27B6198ED BootID:7bebecfe-7083-4c14-9598-ceddbed2cfda Filesystems:[{Device:/dev/sda1 Capacity:105553100800 Type:vfs Inodes:6553600}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:107374182400 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:42:01:0a:80:00:06 Speed:0 Mtu:1460}] Topology:[{Id:0 Memory:15807909888 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[2 3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:47185920 Type:Unified Level:3}]}] CloudProvider:GCE InstanceType:n1-standard-4 InstanceID:1440798398623805404}
I1202 19:53:55.599847 2806 manager.go:188] Version: {KernelVersion:3.16.0-4-amd64 ContainerOsVersion:Debian GNU/Linux 7 (wheezy) DockerVersion:1.11.2 CadvisorVersion: CadvisorRevision:}
W1202 19:53:55.606681 2806 server.go:632] No api server defined - no events will be sent to API server.
I1202 19:53:55.606716 2806 server.go:694] Adding manifest file: /etc/kubernetes/manifests
I1202 19:53:55.607725 2806 server.go:700] Adding manifest url "http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest" with HTTP header map[Metadata-Flavor:[Google]]
W1202 19:53:55.611332 2806 kubelet.go:527] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I1202 19:53:55.611355 2806 kubelet.go:371] Hairpin mode set to "hairpin-veth"
W1202 19:53:55.611834 2806 http.go:64] Failed to read pods from URL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest: 404 Not Found
I1202 19:53:55.634309 2806 manager.go:228] Setting dockerRoot to /var/lib/docker
I1202 19:53:55.646306 2806 server.go:666] Started kubelet v1.3.0-alpha.3.951+835a2577f8d0e4
W1202 19:53:55.646385 2806 kubelet.go:942] No api server defined - no node status update will be sent.
I1202 19:53:55.646389 2806 server.go:117] Starting to listen on 0.0.0.0:10250
I1202 19:53:55.647127 2806 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1202 19:53:55.647146 2806 manager.go:119] Kubernetes client is nil, not starting status manager.
I1202 19:53:55.647159 2806 kubelet.go:2462] Starting kubelet main sync loop.
I1202 19:53:55.647173 2806 kubelet.go:2471] skipping pod synchronization - [network state unknown container runtime is down]
E1202 19:53:55.655958 2806 kubelet.go:885] Image garbage collection failed: unable to find data for container /
I1202 19:53:55.665857 2806 factory.go:208] Registering Docker factory
E1202 19:53:55.665906 2806 manager.go:229] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:53:55.665920 2806 factory.go:53] Registering systemd factory
I1202 19:53:55.666493 2806 factory.go:85] Registering Raw factory
I1202 19:53:55.668500 2806 manager.go:1024] Started watching for new ooms in manager
I1202 19:53:55.668524 2806 oomparser.go:198] OOM parser using kernel log file: "/var/log/kern.log"
I1202 19:53:55.668956 2806 manager.go:277] Starting recovery of all containers
I1202 19:53:55.669333 2806 manager.go:282] Recovery completed
W1202 19:54:15.613065 2806 http.go:64] Failed to read pods from URL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest: 404 Not Found
W1202 19:54:35.614284 2806 http.go:64] Failed to read pods from URL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest: 404 Not Found
Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Flag --config has been deprecated, Use --pod-manifest-path instead. Will be removed in a future version.
Flag --babysit-daemons has been deprecated, Will be removed in a future version.
I1202 19:54:45.912693 3524 feature_gate.go:181] feature gates: map[]
I1202 19:54:45.914571 3524 gce.go:331] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I1202 19:54:45.946912 3524 server.go:370] Successfully initialized cloud provider: "gce" from the config file: ""
I1202 19:54:45.952117 3524 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
I1202 19:54:45.952179 3524 docker.go:376] Start docker client with request timeout=2m0s
E1202 19:54:45.953066 3524 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
I1202 19:54:45.961763 3524 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:45.981530 3524 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:45.981604 3524 server.go:512] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:45.981713 3524 manager.go:143] cAdvisor running in container: "/"
W1202 19:54:46.039331 3524 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:54:46.046985 3524 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
I1202 19:54:46.049326 3524 info.go:47] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1202 19:54:46.049372 3524 manager.go:198] Machine: {NumCores:4 CpuFrequency:2300000 MemoryCapacity:15807909888 MachineID: SystemUUID:E7B27D0E-145A-CBF2-A547-28C27B6198ED BootID:7bebecfe-7083-4c14-9598-ceddbed2cfda Filesystems:[{Device:/dev/sda1 Capacity:105553100800 Type:vfs Inodes:6553600 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:107374182400 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:42:01:0a:80:00:06 Speed:0 Mtu:1460}] Topology:[{Id:0 Memory:15807909888 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[2 3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:47185920 Type:Unified Level:3}]}] CloudProvider:GCE InstanceType:n1-standard-4 InstanceID:1440798398623805404}
I1202 19:54:46.053076 3524 manager.go:204] Version: {KernelVersion:3.16.0-4-amd64 ContainerOsVersion:Debian GNU/Linux 7 (wheezy) DockerVersion:1.11.2 CadvisorVersion: CadvisorRevision:}
I1202 19:54:46.055670 3524 server.go:512] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:46.055790 3524 server.go:706] Using root directory: /var/lib/kubelet
I1202 19:54:46.055855 3524 kubelet.go:308] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:46.055875 3524 kubelet.go:243] Adding manifest file: /etc/kubernetes/manifests
I1202 19:54:46.055917 3524 file.go:48] Watching path "/etc/kubernetes/manifests"
I1202 19:54:46.055935 3524 kubelet.go:253] Watching apiserver
I1202 19:54:46.059624 3524 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:46.059654 3524 kubelet.go:477] Hairpin mode set to "promiscuous-bridge"
I1202 19:54:46.077422 3524 plugins.go:181] Loaded network plugin "kubenet"
I1202 19:54:46.079493 3524 docker_manager.go:259] Setting dockerRoot to /var/lib/docker
I1202 19:54:46.079508 3524 docker_manager.go:262] Setting cgroupDriver to cgroupfs
I1202 19:54:46.082255 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/aws-ebs"
I1202 19:54:46.082278 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/empty-dir"
I1202 19:54:46.082289 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/gce-pd"
I1202 19:54:46.082299 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/git-repo"
I1202 19:54:46.082310 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/host-path"
I1202 19:54:46.082320 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/nfs"
I1202 19:54:46.082331 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/secret"
I1202 19:54:46.082355 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/iscsi"
I1202 19:54:46.082369 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/glusterfs"
I1202 19:54:46.082379 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/rbd"
I1202 19:54:46.082390 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/cinder"
I1202 19:54:46.082400 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/quobyte"
I1202 19:54:46.082411 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/cephfs"
I1202 19:54:46.082433 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/downward-api"
I1202 19:54:46.082445 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/fc"
I1202 19:54:46.082455 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/flocker"
I1202 19:54:46.082466 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-file"
I1202 19:54:46.082479 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/configmap"
I1202 19:54:46.082491 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1202 19:54:46.082502 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-disk"
I1202 19:54:46.082512 3524 plugins.go:344] Loaded volume plugin "kubernetes.io/photon-pd"
I1202 19:54:46.083119 3524 server.go:741] Setting keys quota in /proc/sys/kernel/keys/root_maxkeys to 1000000
I1202 19:54:46.083154 3524 server.go:757] Setting keys bytes in /proc/sys/kernel/keys/root_maxbytes to 25000000
I1202 19:54:46.083177 3524 server.go:776] Started kubelet v1.6.0-alpha.0.1228+2212c421f6e10e
E1202 19:54:46.083752 3524 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
I1202 19:54:46.083975 3524 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I1202 19:54:46.084105 3524 server.go:124] Starting to listen on 0.0.0.0:10250
I1202 19:54:46.088275 3524 server.go:141] Starting to listen read-only on 0.0.0.0:10255
I1202 19:54:46.092291 3524 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-4
I1202 19:54:46.092310 3524 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-b
I1202 19:54:46.092320 3524 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1
E1202 19:54:46.101239 3524 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E1202 19:54:46.101262 3524 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I1202 19:54:46.101270 3524 kubelet_node_status.go:358] Recording NodeHasSufficientDisk event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.101297 3524 kubelet_node_status.go:358] Recording NodeHasSufficientMemory event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.101311 3524 kubelet_node_status.go:358] Recording NodeHasNoDiskPressure event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.102146 3524 container_manager_linux.go:405] Configure resource-only container /docker-daemon with memory limit: 11065536921
I1202 19:54:46.102175 3524 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1202 19:54:46.102197 3524 status_manager.go:131] Starting to sync pod status with apiserver
I1202 19:54:46.102209 3524 kubelet.go:1714] Starting kubelet main sync loop.
I1202 19:54:46.102221 3524 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
I1202 19:54:46.108319 3524 container_manager_linux.go:769] Found 104 PIDs in root, 70 of them are not to be moved
I1202 19:54:46.108334 3524 container_manager_linux.go:776] Moving non-kernel processes: [409 518 519 1748 1829 1860 1874 1956 1957 2044 2053 2155 2235 2242 2312 2375 2435 2489 2661 2694 2709 2761 2824 2836 2861 2865 2871 2872 3091 3457 3459 3462 3479 3524]
I1202 19:54:46.119880 3524 volume_manager.go:240] The desired_state_of_world populator starts
I1202 19:54:46.119901 3524 volume_manager.go:242] Starting Kubelet Volume Manager
I1202 19:54:46.138970 3524 factory.go:295] Registering Docker factory
W1202 19:54:46.139056 3524 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:54:46.139090 3524 factory.go:54] Registering systemd factory
I1202 19:54:46.139289 3524 factory.go:86] Registering Raw factory
I1202 19:54:46.139482 3524 manager.go:1106] Started watching for new ooms in manager
I1202 19:54:46.139558 3524 oomparser.go:200] OOM parser using kernel log file: "/var/log/kern.log"
I1202 19:54:46.140268 3524 manager.go:288] Starting recovery of all containers
I1202 19:54:46.143865 3524 manager.go:293] Recovery completed
E1202 19:54:46.158947 3524 eviction_manager.go:202] eviction manager: unexpected err: failed GetNode: node 'kubernetes-minion-group-c4x6' not found
I1202 19:54:46.221068 3524 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I1202 19:54:46.224876 3524 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-4
I1202 19:54:46.224918 3524 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-b
I1202 19:54:46.224929 3524 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1
I1202 19:54:46.228522 3524 kubelet_node_status.go:358] Recording NodeHasSufficientDisk event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.228609 3524 kubelet_node_status.go:358] Recording NodeHasSufficientMemory event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.228650 3524 kubelet_node_status.go:358] Recording NodeHasNoDiskPressure event message for node kubernetes-minion-group-c4x6
I1202 19:54:46.228773 3524 kubelet_node_status.go:74] Attempting to register node kubernetes-minion-group-c4x6
I1202 19:54:46.233924 3524 container_manager_linux.go:769] Found 70 PIDs in root, 70 of them are not to be moved
I1202 19:54:46.240563 3524 kubelet_node_status.go:77] Successfully registered node kubernetes-minion-group-c4x6
E1202 19:54:46.268198 3524 kubelet_node_status.go:302] Error updating node status, will retry: Operation cannot be fulfilled on nodes "kubernetes-minion-group-c4x6": the object has been modified; please apply your changes to the latest version and try again
E1202 19:54:46.292455 3524 kubelet_node_status.go:302] Error updating node status, will retry: Operation cannot be fulfilled on nodes "kubernetes-minion-group-c4x6": the object has been modified; please apply your changes to the latest version and try again
I1202 19:54:46.298106 3524 kubenet_linux.go:262] CNI network config set to {
"cniVersion": "0.1.0",
"name": "kubenet",
"type": "bridge",
"bridge": "cbr0",
"mtu": 1460,
"addIf": "eth0",
"isGateway": true,
"ipMasq": false,
"hairpinMode": false,
"ipam": {
"type": "host-local",
"subnet": "10.244.4.0/24",
"gateway": "10.244.4.1",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
I1202 19:54:46.298264 3524 kubelet_network.go:211] Setting Pod CIDR: -> 10.244.4.0/24
I1202 19:54:51.102459 3524 kubelet.go:1781] SyncLoop (ADD, "file"): "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b), kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)"
I1202 19:54:51.102597 3524 kubelet.go:1781] SyncLoop (ADD, "api"): ""
I1202 19:54:51.102611 3524 kubelet.go:1781] SyncLoop (ADD, "api"): "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)"
E1202 19:54:51.102844 3524 pod_workers.go:184] Error syncing pod 1ece262b44e6d33656e56a138518be7b, skipping: network is not ready: [Kubenet does not have netConfig. This is most likely due to lack of PodCIDR]
I1202 19:54:51.109731 3524 kubelet.go:1781] SyncLoop (ADD, "api"): "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2fccf6a8-b8c9-11e6-aa17-42010a800002)"
I1202 19:54:51.232248 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:51.232322 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:51.232380 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002")
I1202 19:54:51.232421 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002")
I1202 19:54:51.232451 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b")
I1202 19:54:51.232476 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b")
I1202 19:54:51.232501 3524 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:51.332784 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.332850 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.332911 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.332852 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.332947 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") to pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:51.332927 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.332953 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:51.333010 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") to pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:51.333043 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") to pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:51.333034 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:51.333073 3524 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") to pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:51.333096 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:51.333131 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:51.338831 3524 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:51.403593 3524 docker_manager.go:1977] Need to restart pod infra container for "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)" because it is not found
I1202 19:54:51.422801 3524 docker_manager.go:1977] Need to restart pod infra container for "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)" because it is not found
I1202 19:54:51.811117 3524 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
I1202 19:54:51.811291 3524 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigKeyProvider
I1202 19:54:51.811941 3524 config.go:185] body of failing http response: &{0x6e41f0 0xc420d00040 0x6e4010}
E1202 19:54:51.811987 3524 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
I1202 19:54:51.812010 3524 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigUrlKeyProvider
I1202 19:54:51.813940 3524 config.go:185] body of failing http response: &{0x6e41f0 0xc420eaac00 0x6e4010}
E1202 19:54:51.813972 3524 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
I1202 19:54:52.127811 3524 kubelet.go:1816] SyncLoop (PLEG): "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)", event: &pleg.PodLifecycleEvent{ID:"2ce8d57e-b8c9-11e6-aa17-42010a800002", Type:"ContainerStarted", Data:"3cf25971ea0dda7d6ad88aad10958b02153da06a00dbd82ce4282d25bf9d4b38"}
I1202 19:54:52.132067 3524 kubelet.go:1816] SyncLoop (PLEG): "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)", event: &pleg.PodLifecycleEvent{ID:"2432565ca3c5351a67f0203bb8f07fa3", Type:"ContainerStarted", Data:"84314b6b7dcbcd1195b6a946125ced5274c8be872dcf7a424f26e5bcddac014e"}
I1202 19:54:52.132116 3524 kubelet.go:1816] SyncLoop (PLEG): "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)", event: &pleg.PodLifecycleEvent{ID:"2432565ca3c5351a67f0203bb8f07fa3", Type:"ContainerStarted", Data:"bb32c4d0858ea503ffeb64f4820d77565dc14bdd658d736b8c45ed8a6c3bbe3f"}
Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Flag --config has been deprecated, Use --pod-manifest-path instead. Will be removed in a future version.
Flag --babysit-daemons has been deprecated, Will be removed in a future version.
I1202 19:54:52.757393 3874 feature_gate.go:181] feature gates: map[]
I1202 19:54:52.759167 3874 gce.go:331] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I1202 19:54:52.760265 3874 server.go:370] Successfully initialized cloud provider: "gce" from the config file: ""
I1202 19:54:52.763465 3874 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
I1202 19:54:52.763478 3874 docker.go:376] Start docker client with request timeout=2m0s
E1202 19:54:52.764335 3874 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
I1202 19:54:52.765938 3874 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:52.767097 3874 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:52.767161 3874 server.go:512] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:52.767228 3874 manager.go:143] cAdvisor running in container: "/system"
W1202 19:54:52.770347 3874 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:54:52.773615 3874 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
I1202 19:54:52.775439 3874 info.go:47] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1202 19:54:52.775468 3874 manager.go:198] Machine: {NumCores:4 CpuFrequency:2300000 MemoryCapacity:15807909888 MachineID: SystemUUID:E7B27D0E-145A-CBF2-A547-28C27B6198ED BootID:7bebecfe-7083-4c14-9598-ceddbed2cfda Filesystems:[{Device:/dev/sda1 Capacity:105553100800 Type:vfs Inodes:6553600 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:107374182400 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:42:01:0a:80:00:06 Speed:0 Mtu:1460}] Topology:[{Id:0 Memory:15807909888 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[2 3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:47185920 Type:Unified Level:3}]}] CloudProvider:GCE InstanceType:n1-standard-4 InstanceID:1440798398623805404}
I1202 19:54:52.775831 3874 manager.go:204] Version: {KernelVersion:3.16.0-4-amd64 ContainerOsVersion:Debian GNU/Linux 7 (wheezy) DockerVersion:1.11.2 CadvisorVersion: CadvisorRevision:}
I1202 19:54:52.777720 3874 server.go:512] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:52.777901 3874 server.go:706] Using root directory: /var/lib/kubelet
I1202 19:54:52.777967 3874 kubelet.go:308] cloud provider determined current node name to be kubernetes-minion-group-c4x6
I1202 19:54:52.777989 3874 kubelet.go:243] Adding manifest file: /etc/kubernetes/manifests
I1202 19:54:52.778019 3874 file.go:48] Watching path "/etc/kubernetes/manifests"
I1202 19:54:52.778032 3874 kubelet.go:253] Watching apiserver
I1202 19:54:52.780276 3874 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1202 19:54:52.780306 3874 kubelet.go:477] Hairpin mode set to "promiscuous-bridge"
I1202 19:54:52.783576 3874 plugins.go:181] Loaded network plugin "kubenet"
I1202 19:54:52.785621 3874 docker_manager.go:259] Setting dockerRoot to /var/lib/docker
I1202 19:54:52.785634 3874 docker_manager.go:262] Setting cgroupDriver to cgroupfs
I1202 19:54:52.787322 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/aws-ebs"
I1202 19:54:52.787338 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/empty-dir"
I1202 19:54:52.787345 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/gce-pd"
I1202 19:54:52.787352 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/git-repo"
I1202 19:54:52.787359 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/host-path"
I1202 19:54:52.787366 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/nfs"
I1202 19:54:52.787372 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/secret"
I1202 19:54:52.787380 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/iscsi"
I1202 19:54:52.787389 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/glusterfs"
I1202 19:54:52.787396 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/rbd"
I1202 19:54:52.787403 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/cinder"
I1202 19:54:52.787409 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/quobyte"
I1202 19:54:52.787416 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/cephfs"
I1202 19:54:52.787435 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/downward-api"
I1202 19:54:52.787444 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/fc"
I1202 19:54:52.787456 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/flocker"
I1202 19:54:52.787463 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-file"
I1202 19:54:52.787469 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/configmap"
I1202 19:54:52.787476 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1202 19:54:52.787484 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/azure-disk"
I1202 19:54:52.787491 3874 plugins.go:344] Loaded volume plugin "kubernetes.io/photon-pd"
I1202 19:54:52.788211 3874 server.go:776] Started kubelet v1.6.0-alpha.0.1228+2212c421f6e10e
E1202 19:54:52.788440 3874 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
I1202 19:54:52.788566 3874 server.go:124] Starting to listen on 0.0.0.0:10250
I1202 19:54:52.788631 3874 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I1202 19:54:52.788800 3874 server.go:141] Starting to listen read-only on 0.0.0.0:10255
I1202 19:54:52.796371 3874 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-4
I1202 19:54:52.796390 3874 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-b
I1202 19:54:52.796399 3874 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1
E1202 19:54:52.799726 3874 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E1202 19:54:52.799749 3874 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I1202 19:54:52.799757 3874 kubelet_node_status.go:358] Recording NodeHasSufficientDisk event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.799808 3874 kubelet_node_status.go:358] Recording NodeHasSufficientMemory event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.799823 3874 kubelet_node_status.go:358] Recording NodeHasNoDiskPressure event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.800645 3874 container_manager_linux.go:405] Configure resource-only container /docker-daemon with memory limit: 11065536921
I1202 19:54:52.800698 3874 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1202 19:54:52.800731 3874 status_manager.go:131] Starting to sync pod status with apiserver
I1202 19:54:52.800743 3874 kubelet.go:1714] Starting kubelet main sync loop.
I1202 19:54:52.800755 3874 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
I1202 19:54:52.800783 3874 volume_manager.go:240] The desired_state_of_world populator starts
I1202 19:54:52.800792 3874 volume_manager.go:242] Starting Kubelet Volume Manager
I1202 19:54:52.802302 3874 container_manager_linux.go:769] Found 76 PIDs in root, 70 of them are not to be moved
I1202 19:54:52.802333 3874 container_manager_linux.go:776] Moving non-kernel processes: [3727 3728 3729 3730 3731 3732]
I1202 19:54:52.808322 3874 factory.go:295] Registering Docker factory
W1202 19:54:52.808361 3874 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1202 19:54:52.808372 3874 factory.go:54] Registering systemd factory
I1202 19:54:52.808586 3874 factory.go:86] Registering Raw factory
I1202 19:54:52.808963 3874 manager.go:1106] Started watching for new ooms in manager
I1202 19:54:52.809018 3874 oomparser.go:200] OOM parser using kernel log file: "/var/log/kern.log"
I1202 19:54:52.809704 3874 manager.go:288] Starting recovery of all containers
I1202 19:54:52.820745 3874 container_manager_linux.go:769] Found 70 PIDs in root, 70 of them are not to be moved
I1202 19:54:52.853839 3874 manager.go:293] Recovery completed
I1202 19:54:52.868205 3874 threshold_notifier_linux.go:76] eviction: setting notification threshold to 15545765888
I1202 19:54:52.901127 3874 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I1202 19:54:52.903507 3874 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=n1-standard-4
I1202 19:54:52.903525 3874 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-central1-b
I1202 19:54:52.903530 3874 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-central1
I1202 19:54:52.906530 3874 kubelet_node_status.go:358] Recording NodeHasSufficientDisk event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.906566 3874 kubelet_node_status.go:358] Recording NodeHasSufficientMemory event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.906596 3874 kubelet_node_status.go:358] Recording NodeHasNoDiskPressure event message for node kubernetes-minion-group-c4x6
I1202 19:54:52.906616 3874 kubelet_node_status.go:74] Attempting to register node kubernetes-minion-group-c4x6
I1202 19:54:52.914737 3874 kubelet_node_status.go:113] Node kubernetes-minion-group-c4x6 was previously registered
I1202 19:54:52.914751 3874 kubelet_node_status.go:77] Successfully registered node kubernetes-minion-group-c4x6
I1202 19:54:52.916248 3874 kubenet_linux.go:262] CNI network config set to {
"cniVersion": "0.1.0",
"name": "kubenet",
"type": "bridge",
"bridge": "cbr0",
"mtu": 1460,
"addIf": "eth0",
"isGateway": true,
"ipMasq": false,
"hairpinMode": false,
"ipam": {
"type": "host-local",
"subnet": "10.244.4.0/24",
"gateway": "10.244.4.1",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
I1202 19:54:52.916345 3874 kubelet_network.go:211] Setting Pod CIDR: -> 10.244.4.0/24
I1202 19:54:57.801063 3874 kubelet.go:1781] SyncLoop (ADD, "file"): "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b), kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)"
I1202 19:54:57.801262 3874 kubelet.go:1781] SyncLoop (ADD, "api"): "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2fccf6a8-b8c9-11e6-aa17-42010a800002), node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)"
I1202 19:54:57.801302 3874 kubelet.go:1816] SyncLoop (PLEG): "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)", event: &pleg.PodLifecycleEvent{ID:"2432565ca3c5351a67f0203bb8f07fa3", Type:"ContainerStarted", Data:"bb32c4d0858ea503ffeb64f4820d77565dc14bdd658d736b8c45ed8a6c3bbe3f"}
I1202 19:54:57.801333 3874 kubelet.go:1816] SyncLoop (PLEG): "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)", event: &pleg.PodLifecycleEvent{ID:"2ce8d57e-b8c9-11e6-aa17-42010a800002", Type:"ContainerStarted", Data:"3cf25971ea0dda7d6ad88aad10958b02153da06a00dbd82ce4282d25bf9d4b38"}
E1202 19:54:57.801744 3874 pod_workers.go:184] Error syncing pod 1ece262b44e6d33656e56a138518be7b, skipping: network is not ready: [Kubenet does not have netConfig. This is most likely due to lack of PodCIDR]
E1202 19:54:57.808124 3874 kubelet.go:1508] Failed creating a mirror pod for "kube-proxy-kubernetes-minion-group-c4x6_kube-system(2432565ca3c5351a67f0203bb8f07fa3)": pods "kube-proxy-kubernetes-minion-group-c4x6" already exists
I1202 19:54:57.811611 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002")
I1202 19:54:57.811649 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002")
I1202 19:54:57.911999 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:57.912069 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:57.912115 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3")
I1202 19:54:57.912169 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") to pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:57.912217 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") to pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:57.912234 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b")
I1202 19:54:57.912252 3874 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b")
I1202 19:54:57.912310 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2ce8d57e-b8c9-11e6-aa17-42010a800002-log" (spec.Name: "log") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:57.914963 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:54:58.012463 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") to pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:58.012515 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") to pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:58.012540 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.012561 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.012533 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlog" (spec.Name: "varlog") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:58.012582 3874 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") to pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.012556 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/1ece262b44e6d33656e56a138518be7b-varlibdockercontainers" (spec.Name: "varlibdockercontainers") pod "1ece262b44e6d33656e56a138518be7b" (UID: "1ece262b44e6d33656e56a138518be7b").
I1202 19:54:58.012600 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-varlog" (spec.Name: "varlog") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.012607 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-ssl-certs-host" (spec.Name: "ssl-certs-host") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.012629 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/2432565ca3c5351a67f0203bb8f07fa3-kubeconfig" (spec.Name: "kubeconfig") pod "2432565ca3c5351a67f0203bb8f07fa3" (UID: "2432565ca3c5351a67f0203bb8f07fa3").
I1202 19:54:58.103744 3874 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigUrlKeyProvider
I1202 19:54:58.104419 3874 config.go:185] body of failing http response: &{0x6e41f0 0xc420f31c40 0x6e4010}
E1202 19:54:58.104452 3874 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
I1202 19:54:58.107010 3874 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
I1202 19:54:58.107073 3874 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigKeyProvider
I1202 19:54:58.107439 3874 config.go:185] body of failing http response: &{0x6e41f0 0xc420f31d80 0x6e4010}
E1202 19:54:58.107466 3874 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
I1202 19:54:59.341218 3874 kube_docker_client.go:331] Stop pulling image "gcr.io/google_containers/node-problem-detector:v0.1": "Status: Downloaded newer image for gcr.io/google_containers/node-problem-detector:v0.1"
I1202 19:54:59.417715 3874 docker_manager.go:798] Container "node-problem-detector" of pod "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)" created with warnings: [Your kernel does not support CPU cfs period. Period discarded. Your kernel does not support CPU cfs quota. Quota discarded.]
I1202 19:54:59.874407 3874 kubelet.go:1816] SyncLoop (PLEG): "node-problem-detector-v0.1-53cqf_kube-system(2ce8d57e-b8c9-11e6-aa17-42010a800002)", event: &pleg.PodLifecycleEvent{ID:"2ce8d57e-b8c9-11e6-aa17-42010a800002", Type:"ContainerStarted", Data:"6de24f1f9b9a8448e8be0419db2ff148b5fc6d176b4971560a20c84464e74a03"}
I1202 19:54:59.917860 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:55:00.919851 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
E1202 19:55:10.801260 3874 pod_workers.go:184] Error syncing pod 1ece262b44e6d33656e56a138518be7b, skipping: network is not ready: [Kubenet does not have netConfig. This is most likely due to lack of PodCIDR]
I1202 19:55:22.959681 3874 kubelet_node_status.go:358] Recording NodeReady event message for node kubernetes-minion-group-c4x6
I1202 19:55:25.807246 3874 kubelet.go:1781] SyncLoop (ADD, "api"): "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(447bea7f-b8c9-11e6-aa17-42010a800002)"
I1202 19:55:26.109099 3874 docker_manager.go:1977] Need to restart pod infra container for "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b)" because it is not found
2016/12/02 19:55:26 Error retriving last reserved ip: Failed to retrieve last reserved ip: open /var/lib/cni/networks/kubenet/last_reserved_ip: no such file or directory
E1202 19:55:26.490957 3874 kubenet_linux.go:804] Failed to flush dedup chain: Failed to flush filter chain KUBE-DEDUP: exit status 255, output: Chain 'KUBE-DEDUP' doesn't exist.
I1202 19:55:26.556585 3874 docker_manager.go:2238] Determined pod ip after infra change: "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b)": "10.244.4.2"
I1202 19:55:26.911521 3874 kubelet.go:1816] SyncLoop (PLEG): "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b)", event: &pleg.PodLifecycleEvent{ID:"1ece262b44e6d33656e56a138518be7b", Type:"ContainerStarted", Data:"ccbff1cf1c287105cae2a2449734625a9575a2b11bfe4f0b8066edf78aaccf43"}
I1202 19:55:36.763026 3874 kube_docker_client.go:328] Pulling image "gcr.io/google_containers/fluentd-gcp:1.28": "c26ade95f65d: Extracting [=====================================> ] 66.29 MB/89.33 MB"
I1202 19:55:39.302369 3874 kube_docker_client.go:331] Stop pulling image "gcr.io/google_containers/fluentd-gcp:1.28": "Status: Downloaded newer image for gcr.io/google_containers/fluentd-gcp:1.28"
I1202 19:55:39.937637 3874 kubelet.go:1816] SyncLoop (PLEG): "fluentd-cloud-logging-kubernetes-minion-group-c4x6_kube-system(1ece262b44e6d33656e56a138518be7b)", event: &pleg.PodLifecycleEvent{ID:"1ece262b44e6d33656e56a138518be7b", Type:"ContainerStarted", Data:"31df440f887d230a16e96bbc1bd4d535b3412624571a1164cb2ca66e88522679"}
I1202 19:55:52.667398 3874 server.go:741] GET /healthz: (38.846µs) 200 [[curl/7.26.0] 127.0.0.1:38227]
I1202 19:55:52.824306 3874 container_manager_linux.go:769] Found 70 PIDs in root, 70 of them are not to be moved
I1202 19:56:02.678817 3874 server.go:741] GET /healthz: (36.566µs) 200 [[curl/7.26.0] 127.0.0.1:38237]
I1202 19:56:02.851166 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:56:12.690122 3874 server.go:741] GET /healthz: (31.705µs) 200 [[curl/7.26.0] 127.0.0.1:38242]
I1202 19:56:22.702469 3874 server.go:741] GET /healthz: (32.671µs) 200 [[curl/7.26.0] 127.0.0.1:38245]
I1202 19:56:32.714766 3874 server.go:741] GET /healthz: (33.447µs) 200 [[curl/7.26.0] 127.0.0.1:38248]
I1202 19:56:42.726751 3874 server.go:741] GET /healthz: (34.745µs) 200 [[curl/7.26.0] 127.0.0.1:38251]
I1202 19:56:52.738225 3874 server.go:741] GET /healthz: (50.258µs) 200 [[curl/7.26.0] 127.0.0.1:38256]
I1202 19:56:52.825277 3874 container_manager_linux.go:769] Found 70 PIDs in root, 70 of them are not to be moved
I1202 19:57:02.753477 3874 server.go:741] GET /healthz: (37.88µs) 200 [[curl/7.26.0] 127.0.0.1:38261]
I1202 19:57:05.892303 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:57:12.766660 3874 server.go:741] GET /healthz: (32.237µs) 200 [[curl/7.26.0] 127.0.0.1:38265]
I1202 19:57:22.778913 3874 server.go:741] GET /healthz: (34.349µs) 200 [[curl/7.26.0] 127.0.0.1:38270]
I1202 19:57:32.791273 3874 server.go:741] GET /healthz: (37.205µs) 200 [[curl/7.26.0] 127.0.0.1:38273]
I1202 19:57:42.803923 3874 server.go:741] GET /healthz: (27.935µs) 200 [[curl/7.26.0] 127.0.0.1:38276]
I1202 19:57:52.816542 3874 server.go:741] GET /healthz: (29.481µs) 200 [[curl/7.26.0] 127.0.0.1:38281]
I1202 19:57:52.826096 3874 container_manager_linux.go:769] Found 70 PIDs in root, 70 of them are not to be moved
I1202 19:58:02.828535 3874 server.go:741] GET /healthz: (40.177µs) 200 [[curl/7.26.0] 127.0.0.1:38286]
I1202 19:58:05.068135 3874 server.go:741] GET /stats/summary/: (2.883426ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 19:58:12.839729 3874 server.go:741] GET /healthz: (34.876µs) 200 [[curl/7.26.0] 127.0.0.1:38292]
I1202 19:58:13.852200 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:58:22.852137 3874 server.go:741] GET /healthz: (34.729µs) 200 [[curl/7.26.0] 127.0.0.1:38297]
I1202 19:58:32.864105 3874 server.go:741] GET /healthz: (32.773µs) 200 [[curl/7.26.0] 127.0.0.1:38300]
I1202 19:58:42.876368 3874 server.go:741] GET /healthz: (37.102µs) 200 [[curl/7.26.0] 127.0.0.1:38303]
I1202 19:58:52.826956 3874 container_manager_linux.go:769] Found 64 PIDs in root, 64 of them are not to be moved
I1202 19:58:52.887813 3874 server.go:741] GET /healthz: (39.751µs) 200 [[curl/7.26.0] 127.0.0.1:38308]
I1202 19:59:02.900201 3874 server.go:741] GET /healthz: (46.707µs) 200 [[curl/7.26.0] 127.0.0.1:38313]
I1202 19:59:05.043948 3874 server.go:741] GET /stats/summary/: (2.748239ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 19:59:12.912684 3874 server.go:741] GET /healthz: (44.009µs) 200 [[curl/7.26.0] 127.0.0.1:38318]
I1202 19:59:22.925181 3874 server.go:741] GET /healthz: (53.428µs) 200 [[curl/7.26.0] 127.0.0.1:38322]
I1202 19:59:29.839700 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 19:59:32.937415 3874 server.go:741] GET /healthz: (34.148µs) 200 [[curl/7.26.0] 127.0.0.1:38325]
I1202 19:59:42.949964 3874 server.go:741] GET /healthz: (34.876µs) 200 [[curl/7.26.0] 127.0.0.1:38330]
I1202 19:59:52.788701 3874 kubelet.go:1155] Image garbage collection succeeded
I1202 19:59:52.827772 3874 container_manager_linux.go:769] Found 64 PIDs in root, 64 of them are not to be moved
I1202 19:59:52.963791 3874 server.go:741] GET /healthz: (33.076µs) 200 [[curl/7.26.0] 127.0.0.1:38335]
I1202 20:00:02.976428 3874 server.go:741] GET /healthz: (49.32µs) 200 [[curl/7.26.0] 127.0.0.1:38375]
I1202 20:00:05.028836 3874 server.go:741] GET /stats/summary/: (2.632665ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 20:00:12.989135 3874 server.go:741] GET /healthz: (36.093µs) 200 [[curl/7.26.0] 127.0.0.1:38380]
I1202 20:00:23.001788 3874 server.go:741] GET /healthz: (34.857µs) 200 [[curl/7.26.0] 127.0.0.1:38384]
I1202 20:00:33.014513 3874 server.go:741] GET /healthz: (34.706µs) 200 [[curl/7.26.0] 127.0.0.1:38387]
I1202 20:00:36.897171 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 20:00:43.027365 3874 server.go:741] GET /healthz: (36.927µs) 200 [[curl/7.26.0] 127.0.0.1:38390]
I1202 20:00:52.828850 3874 container_manager_linux.go:769] Found 65 PIDs in root, 65 of them are not to be moved
I1202 20:00:53.040970 3874 server.go:741] GET /healthz: (37.126µs) 200 [[curl/7.26.0] 127.0.0.1:38397]
I1202 20:01:03.053802 3874 server.go:741] GET /healthz: (36.118µs) 200 [[curl/7.26.0] 127.0.0.1:38402]
I1202 20:01:05.047323 3874 server.go:741] GET /stats/summary/: (2.69022ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 20:01:13.066616 3874 server.go:741] GET /healthz: (35.525µs) 200 [[curl/7.26.0] 127.0.0.1:38407]
I1202 20:01:23.078802 3874 server.go:741] GET /healthz: (33.658µs) 200 [[curl/7.26.0] 127.0.0.1:38410]
I1202 20:01:33.090908 3874 server.go:741] GET /healthz: (37.867µs) 200 [[curl/7.26.0] 127.0.0.1:38414]
I1202 20:01:43.103078 3874 server.go:741] GET /healthz: (30.6µs) 200 [[curl/7.26.0] 127.0.0.1:38417]
I1202 20:01:52.829874 3874 container_manager_linux.go:769] Found 65 PIDs in root, 65 of them are not to be moved
I1202 20:01:53.115621 3874 server.go:741] GET /healthz: (33.312µs) 200 [[curl/7.26.0] 127.0.0.1:38422]
I1202 20:02:03.127955 3874 server.go:741] GET /healthz: (33.612µs) 200 [[curl/7.26.0] 127.0.0.1:38427]
I1202 20:02:03.894693 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 20:02:05.032977 3874 server.go:741] GET /stats/summary/: (2.57844ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 20:02:13.140539 3874 server.go:741] GET /healthz: (34.235µs) 200 [[curl/7.26.0] 127.0.0.1:38432]
I1202 20:02:23.153191 3874 server.go:741] GET /healthz: (33.35µs) 200 [[curl/7.26.0] 127.0.0.1:38435]
I1202 20:02:33.165471 3874 server.go:741] GET /healthz: (33.266µs) 200 [[curl/7.26.0] 127.0.0.1:38439]
I1202 20:02:43.177731 3874 server.go:741] GET /healthz: (37.105µs) 200 [[curl/7.26.0] 127.0.0.1:38442]
I1202 20:02:52.830857 3874 container_manager_linux.go:769] Found 65 PIDs in root, 65 of them are not to be moved
I1202 20:02:53.190332 3874 server.go:741] GET /healthz: (36.528µs) 200 [[curl/7.26.0] 127.0.0.1:38447]
I1202 20:03:03.202420 3874 server.go:741] GET /healthz: (37.517µs) 200 [[curl/7.26.0] 127.0.0.1:38452]
I1202 20:03:05.056008 3874 server.go:741] GET /stats/summary/: (2.623863ms) 200 [[Go-http-client/1.1] 10.244.6.3:35569]
I1202 20:03:11.847234 3874 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2ce8d57e-b8c9-11e6-aa17-42010a800002-default-token-et92n" (spec.Name: "default-token-et92n") pod "2ce8d57e-b8c9-11e6-aa17-42010a800002" (UID: "2ce8d57e-b8c9-11e6-aa17-42010a800002").
I1202 20:03:13.214379 3874 server.go:741] GET /healthz: (54.58µs) 200 [[curl/7.26.0] 127.0.0.1:38457]
I1202 20:03:23.226382 3874 server.go:741] GET /healthz: (32.816µs) 200 [[curl/7.26.0] 127.0.0.1:38462]
I1202 20:03:33.239170 3874 server.go:741] GET /healthz: (34.82µs) 200 [[curl/7.26.0] 127.0.0.1:38465]
I1202 20:03:43.251614 3874 server.go:741] GET /healthz: (37.773µs) 200 [[curl/7.26.0] 127.0.0.1:38469]