@maddyblue
Created September 30, 2015 06:48
==> kube-apiserver.log <==
I0930 02:45:00.550361 15299 handlers.go:131] POST /api/v1/namespaces/default/events: (7.383885ms) 201 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:00.556931 15299 handlers.go:131] GET /api/v1/namespaces/default/pods/nginx: (1.333617ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:00.560711 15299 handlers.go:131] PUT /api/v1/namespaces/default/pods/nginx/status: (2.51882ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kube-controller-manager.log <==
I0930 02:45:00.561793 15325 controller.go:153] No jobs found for pod nginx, job controller will avoid syncing
I0930 02:45:00.561813 15325 controller.go:250] Pod nginx updated.
I0930 02:45:00.561863 15325 controller.go:214] No daemon sets found for pod nginx, daemon set controller will avoid syncing
==> kubelet.log <==
I0930 02:45:00.562010 15331 config.go:252] Setting pods for source api
==> kube-controller-manager.log <==
I0930 02:45:00.562099 15325 replication_controller.go:206] No controllers found for pod nginx, replication manager will avoid syncing
==> kubelet.log <==
I0930 02:45:00.562265 15331 manager.go:223] Status for pod "nginx_default" updated successfully
==> kube-apiserver.log <==
I0930 02:45:01.540186 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (274.336µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:01.540197 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (213.156µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:02.541304 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (249.053µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:02.541358 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (257.644µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:03.509434 15299 handlers.go:131] GET /api/v1/resourcequotas: (1.302747ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:03.521627 15299 handlers.go:131] GET /api/v1/nodes: (1.301909ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:03.542268 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (176.279µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:03.542273 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (194.717µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:04.314057 15299 handlers.go:131] GET /api/v1/nodes/127.0.0.1: (1.67968ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:04.320191 15299 handlers.go:131] PUT /api/v1/nodes/127.0.0.1/status: (3.581035ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:04.543419 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (270.355µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:04.543577 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (439.231µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
==> kubelet.log <==
I0930 02:45:04.554061 15331 rkt.go:185] rkt: Run command:[version]
==> kube-apiserver.log <==
I0930 02:45:05.544638 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (214.622µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:05.544638 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (212.317µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:06.498567 15299 handlers.go:131] GET /api/v1/watch/persistentvolumes?resourceVersion=36: (9.998022201s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36609]
I0930 02:45:06.498568 15299 handlers.go:131] GET /api/v1/watch/persistentvolumeclaims?resourceVersion=36: (9.99824241s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36603]
I0930 02:45:06.498829 15299 handlers.go:131] GET /api/v1/watch/persistentvolumes?resourceVersion=36: (9.998249882s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36604]
I0930 02:45:06.545706 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (235.994µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:06.545713 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (231.453µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
==> kubelet.log <==
I0930 02:45:06.618041 15331 kubelet.go:1943] SyncLoop (periodic sync)
I0930 02:45:06.618142 15331 kubelet.go:1910] SyncLoop (housekeeping)
I0930 02:45:06.618168 15331 rkt.go:775] Rkt getting pods
I0930 02:45:06.627531 15331 rkt.go:775] Rkt getting pods
I0930 02:45:06.627777 15331 volumes.go:109] Used volume plugin "kubernetes.io/secret" for default-token-3rqya
I0930 02:45:06.627858 15331 kubelet.go:2553] Generating status for "nginx_default"
I0930 02:45:06.628038 15331 rkt.go:185] rkt: Run command:[status 198c8d13-7b1a-4bc3-9704-d0679cbc500f]
I0930 02:45:06.643330 15331 volumes.go:193] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-3rqya of pod b748dbf0-673e-11e5-907f-d050994635df
I0930 02:45:06.643389 15331 volumes.go:229] Used volume plugin "kubernetes.io/secret" for b748dbf0-673e-11e5-907f-d050994635df/kubernetes.io~secret
I0930 02:45:06.727743 15331 rkt.go:775] Rkt getting pods
W0930 02:45:06.747775 15331 pod_info.go:162] rkt: Cannot get exit code for container &{198c8d13-7b1a-4bc3-9704-d0679cbc500f:nginx nginx nginx 2079597216 1443595500}
I0930 02:45:06.747923 15331 rkt.go:983] Pod "nginx_default" is not running, will start it
I0930 02:45:06.747945 15331 rkt.go:685] Rkt starts to run pod: name "nginx_default".
I0930 02:45:06.747989 15331 rkt.go:185] rkt: Run command:[image list --no-legend=true --fields=name]
I0930 02:45:06.776110 15331 rkt.go:185] rkt: Run command:[image cat-manifest nginx:latest]
I0930 02:45:06.777192 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Pulled' Container image "nginx" already present on machine
==> kube-apiserver.log <==
I0930 02:45:06.781462 15299 handlers.go:131] PUT /api/v1/namespaces/default/events/nginx.1408ae6ffd5a34b9: (3.585365ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:06.800941 15331 rkt.go:185] rkt: Run command:[image list --no-legend=true --fields=key,name]
I0930 02:45:06.824984 15331 rkt.go:185] rkt: Run command:[prepare --quiet --pod-manifest /tmp/manifest-nginx-727588853 --stage1-image /data/go/stage1-lkvm.aci]
I0930 02:45:06.836690 15331 rkt.go:775] Rkt getting pods
I0930 02:45:06.945554 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.057618 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.166506 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.275723 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.387759 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.495613 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:07.499797 15299 handlers.go:131] GET /api/v1/persistentvolumes: (1.063331ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:07.500098 15299 handlers.go:131] GET /api/v1/persistentvolumeclaims: (1.388023ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
I0930 02:45:07.500304 15299 handlers.go:131] GET /api/v1/persistentvolumes: (986.716µs) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36620]
I0930 02:45:07.546919 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (185.777µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:07.547021 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (191.714µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
==> kubelet.log <==
I0930 02:45:07.604017 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.717961 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.825417 15331 rkt.go:775] Rkt getting pods
I0930 02:45:07.933605 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.045218 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.153424 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.261555 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.374876 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.483005 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:08.524368 15299 handlers.go:131] GET /api/v1/nodes: (1.493902ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
==> kube-controller-manager.log <==
I0930 02:45:08.524761 15325 nodecontroller.go:497] Nodes ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{Amount:12.000 Format:DecimalSI} memory:{Amount:33668587520.000 Format:BinarySI} pods:{Amount:40.000 Format:DecimalSI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-09-30 02:44:54 -0400 EDT LastTransitionTime:2015-09-30 02:44:24 -0400 EDT Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:127.0.0.1} {Type:InternalIP Address:127.0.0.1}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:da54b58fe0bc430896010ca3ab68992a SystemUUID:00020003-0004-0005-0006-000700080009 BootID:b780391f-3232-4738-8719-ee9ebc67679e KernelVersion:4.1.0-1-amd64 OsImage:Debian GNU/Linux stretch/sid ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty KubeProxyVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty}}
vs {Capacity:map[cpu:{Amount:12.000 Format:DecimalSI} memory:{Amount:33668587520.000 Format:BinarySI} pods:{Amount:40.000 Format:DecimalSI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-09-30 02:45:04 -0400 EDT LastTransitionTime:2015-09-30 02:44:24 -0400 EDT Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:127.0.0.1} {Type:InternalIP Address:127.0.0.1}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:da54b58fe0bc430896010ca3ab68992a SystemUUID:00020003-0004-0005-0006-000700080009 BootID:b780391f-3232-4738-8719-ee9ebc67679e KernelVersion:4.1.0-1-amd64 OsImage:Debian GNU/Linux stretch/sid ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty KubeProxyVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty}}.
==> kube-apiserver.log <==
I0930 02:45:08.547758 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (165.175µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:08.547774 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (178.793µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
==> kubelet.log <==
I0930 02:45:08.594503 15331 rkt.go:775] Rkt getting pods
I0930 02:45:08.713668 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:09.548629 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (183.473µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:09.548630 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (160.984µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
==> kubelet.log <==
I0930 02:45:09.569267 15331 rkt.go:185] rkt: Run command:[version]
I0930 02:45:10.396321 15331 rkt.go:580] 'rkt prepare' returns "50e2d545-dee3-4186-a5da-aa5938448042"
I0930 02:45:10.396662 15331 rkt.go:626] rkt: Creating service file "k8s_b748dbf0-673e-11e5-907f-d050994635df.service" for pod "nginx_default"
I0930 02:45:10.462015 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Created' Created with rkt id 50e2d545
I0930 02:45:10.464213 15331 kubelet.go:2553] Generating status for "nginx_default"
I0930 02:45:10.464409 15331 rkt.go:185] rkt: Run command:[status 50e2d545-dee3-4186-a5da-aa5938448042]
I0930 02:45:10.464458 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Started' Started with rkt id 50e2d545
==> kube-apiserver.log <==
I0930 02:45:10.465034 15299 handlers.go:131] POST /api/v1/namespaces/default/events: (2.526992ms) 201 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:10.470973 15299 handlers.go:131] POST /api/v1/namespaces/default/events: (4.596665ms) 201 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:10.480315 15299 handlers.go:131] GET /api/v1/namespaces/default/pods/nginx: (1.095598ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:10.484600 15299 handlers.go:131] PUT /api/v1/namespaces/default/pods/nginx/status: (3.20082ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:10.485650 15331 manager.go:223] Status for pod "nginx_default" updated successfully
==> kube-controller-manager.log <==
I0930 02:45:10.485769 15325 controller.go:153] No jobs found for pod nginx, job controller will avoid syncing
I0930 02:45:10.485910 15325 replication_controller.go:206] No controllers found for pod nginx, replication manager will avoid syncing
I0930 02:45:10.485932 15325 controller.go:250] Pod nginx updated.
I0930 02:45:10.485995 15325 controller.go:214] No daemon sets found for pod nginx, daemon set controller will avoid syncing
==> kubelet.log <==
I0930 02:45:10.486217 15331 config.go:252] Setting pods for source api
==> kube-apiserver.log <==
I0930 02:45:10.549636 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (207.009µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:10.549638 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (190.946µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:11.550658 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (224.679µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:11.550657 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (230.127µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:12.551716 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (216.018µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:12.551739 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (216.368µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:13.511501 15299 handlers.go:131] GET /api/v1/resourcequotas: (1.259934ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:13.526583 15299 handlers.go:131] GET /api/v1/nodes: (1.369306ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:13.552743 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (215.6µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:13.552744 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (220.977µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:14.323152 15299 handlers.go:131] GET /api/v1/nodes/127.0.0.1: (1.69707ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:14.329274 15299 handlers.go:131] PUT /api/v1/nodes/127.0.0.1/status: (3.971377ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:14.553835 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (262.603µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:14.553835 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (203.657µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
==> kubelet.log <==
I0930 02:45:14.581282 15331 rkt.go:185] rkt: Run command:[version]
==> kube-apiserver.log <==
I0930 02:45:14.698119 15299 handlers.go:131] GET /api: (169.784µs) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36631]
I0930 02:45:14.700529 15299 handlers.go:131] GET /api/v1/namespaces/default/pods: (1.48566ms) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36631]
I0930 02:45:15.554959 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (251.568µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:15.554973 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (271.821µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:16.556253 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (234.667µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:16.556270 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (239.206µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
==> kubelet.log <==
I0930 02:45:16.643666 15331 kubelet.go:1943] SyncLoop (periodic sync)
I0930 02:45:16.643760 15331 kubelet.go:1910] SyncLoop (housekeeping)
I0930 02:45:16.643788 15331 rkt.go:775] Rkt getting pods
I0930 02:45:16.653490 15331 rkt.go:775] Rkt getting pods
I0930 02:45:16.653754 15331 volumes.go:109] Used volume plugin "kubernetes.io/secret" for default-token-3rqya
I0930 02:45:16.653846 15331 kubelet.go:2553] Generating status for "nginx_default"
I0930 02:45:16.654042 15331 rkt.go:185] rkt: Run command:[status 50e2d545-dee3-4186-a5da-aa5938448042]
I0930 02:45:16.666904 15331 volumes.go:193] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-3rqya of pod b748dbf0-673e-11e5-907f-d050994635df
I0930 02:45:16.666980 15331 volumes.go:229] Used volume plugin "kubernetes.io/secret" for b748dbf0-673e-11e5-907f-d050994635df/kubernetes.io~secret
I0930 02:45:16.753671 15331 rkt.go:775] Rkt getting pods
W0930 02:45:16.772857 15331 pod_info.go:162] rkt: Cannot get exit code for container &{50e2d545-dee3-4186-a5da-aa5938448042:nginx nginx nginx 2079597216 1443595510}
I0930 02:45:16.773022 15331 rkt.go:983] Pod "nginx_default" is not running, will start it
I0930 02:45:16.773056 15331 rkt.go:685] Rkt starts to run pod: name "nginx_default".
I0930 02:45:16.773128 15331 rkt.go:185] rkt: Run command:[image list --no-legend=true --fields=name]
I0930 02:45:16.802267 15331 rkt.go:185] rkt: Run command:[image cat-manifest nginx:latest]
I0930 02:45:16.803303 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Pulled' Container image "nginx" already present on machine
==> kube-apiserver.log <==
I0930 02:45:16.807392 15299 handlers.go:131] PUT /api/v1/namespaces/default/events/nginx.1408ae6ffd5a34b9: (3.25327ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:16.824337 15331 rkt.go:185] rkt: Run command:[image list --no-legend=true --fields=key,name]
I0930 02:45:16.849806 15331 rkt.go:185] rkt: Run command:[prepare --quiet --pod-manifest /tmp/manifest-nginx-474819792 --stage1-image /data/go/stage1-lkvm.aci]
I0930 02:45:16.861798 15331 rkt.go:775] Rkt getting pods
I0930 02:45:16.970100 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.082017 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.188723 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.296790 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.409031 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:17.498824 15299 handlers.go:131] GET /api/v1/watch/persistentvolumes?resourceVersion=42: (9.998498499s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36613]
I0930 02:45:17.498988 15299 handlers.go:131] GET /api/v1/watch/persistentvolumeclaims?resourceVersion=42: (9.998315306s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36620]
I0930 02:45:17.499065 15299 handlers.go:131] GET /api/v1/watch/persistentvolumes?resourceVersion=42: (9.998125338s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36612]
==> kubelet.log <==
I0930 02:45:17.517039 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:17.557321 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (198.139µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:17.557320 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (211.549µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
==> kubelet.log <==
I0930 02:45:17.625598 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.737370 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.845311 15331 rkt.go:775] Rkt getting pods
I0930 02:45:17.953485 15331 rkt.go:775] Rkt getting pods
I0930 02:45:18.066086 15331 rkt.go:775] Rkt getting pods
I0930 02:45:18.174519 15331 rkt.go:775] Rkt getting pods
I0930 02:45:18.283307 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:18.300507 15299 handlers.go:131] GET /api: (167.759µs) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36634]
I0930 02:45:18.303170 15299 handlers.go:131] GET /api/v1/namespaces/default/pods: (1.36714ms) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36634]
==> kubelet.log <==
I0930 02:45:18.398497 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:18.500147 15299 handlers.go:131] GET /api/v1/persistentvolumes: (1.129192ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36624]
I0930 02:45:18.500427 15299 handlers.go:131] GET /api/v1/persistentvolumeclaims: (1.421058ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36623]
I0930 02:45:18.500533 15299 handlers.go:131] GET /api/v1/persistentvolumes: (1.110055ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36635]
==> kubelet.log <==
I0930 02:45:18.506597 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:18.529441 15299 handlers.go:131] GET /api/v1/nodes: (1.475534ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
==> kube-controller-manager.log <==
I0930 02:45:18.530211 15325 nodecontroller.go:497] Nodes ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{Amount:33668587520.000 Format:BinarySI} pods:{Amount:40.000 Format:DecimalSI} cpu:{Amount:12.000 Format:DecimalSI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-09-30 02:45:04 -0400 EDT LastTransitionTime:2015-09-30 02:44:24 -0400 EDT Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:127.0.0.1} {Type:InternalIP Address:127.0.0.1}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:da54b58fe0bc430896010ca3ab68992a SystemUUID:00020003-0004-0005-0006-000700080009 BootID:b780391f-3232-4738-8719-ee9ebc67679e KernelVersion:4.1.0-1-amd64 OsImage:Debian GNU/Linux stretch/sid ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty KubeProxyVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty}}
vs {Capacity:map[pods:{Amount:40.000 Format:DecimalSI} cpu:{Amount:12.000 Format:DecimalSI} memory:{Amount:33668587520.000 Format:BinarySI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-09-30 02:45:14 -0400 EDT LastTransitionTime:2015-09-30 02:44:24 -0400 EDT Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:127.0.0.1} {Type:InternalIP Address:127.0.0.1}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:da54b58fe0bc430896010ca3ab68992a SystemUUID:00020003-0004-0005-0006-000700080009 BootID:b780391f-3232-4738-8719-ee9ebc67679e KernelVersion:4.1.0-1-amd64 OsImage:Debian GNU/Linux stretch/sid ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty KubeProxyVersion:v1.1.0-alpha.1.1462+b67028f40d08d8-dirty}}.
==> kube-apiserver.log <==
I0930 02:45:18.558288 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (260.019µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:18.558476 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (182.634µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
==> kubelet.log <==
I0930 02:45:18.614940 15331 rkt.go:775] Rkt getting pods
I0930 02:45:18.726886 15331 rkt.go:775] Rkt getting pods
==> kube-apiserver.log <==
I0930 02:45:19.559279 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (204.844µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:19.559279 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (214.622µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
==> kubelet.log <==
I0930 02:45:19.597140 15331 rkt.go:185] rkt: Run command:[version]
==> kube-apiserver.log <==
I0930 02:45:19.967296 15299 handlers.go:131] GET /api: (140.45µs) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36641]
I0930 02:45:19.969087 15299 handlers.go:131] GET /api/v1/namespaces/default/pods: (1.245058ms) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36641]
==> kubelet.log <==
I0930 02:45:20.367473 15331 rkt.go:580] 'rkt prepare' returns "198766d2-1a3e-457b-a628-14ef7ff38d97"
I0930 02:45:20.367693 15331 rkt.go:626] rkt: Creating service file "k8s_b748dbf0-673e-11e5-907f-d050994635df.service" for pod "nginx_default"
I0930 02:45:20.418142 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Created' Created with rkt id 198766d2
I0930 02:45:20.420715 15331 kubelet.go:2553] Generating status for "nginx_default"
I0930 02:45:20.420908 15331 rkt.go:185] rkt: Run command:[status 198766d2-1a3e-457b-a628-14ef7ff38d97]
==> kube-apiserver.log <==
I0930 02:45:20.420950 15299 handlers.go:131] POST /api/v1/namespaces/default/events: (2.274168ms) 201 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:20.420941 15331 server.go:690] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"nginx", UID:"b748dbf0-673e-11e5-907f-d050994635df", APIVersion:"v1", ResourceVersion:"24", FieldPath:"spec.containers{nginx}"}): reason: 'Started' Started with rkt id 198766d2
==> kube-apiserver.log <==
I0930 02:45:20.425207 15299 handlers.go:131] POST /api/v1/namespaces/default/events: (1.815171ms) 201 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:20.539511 15331 kubelet.go:2581] Query container info for pod "nginx_default" failed with error (failed to run [status 198766d2-1a3e-457b-a628-14ef7ff38d97]: exit status 1
stdout: state=running
networks=
stderr: Unable to print status: no such file or directory
)
==> kube-apiserver.log <==
I0930 02:45:20.541534 15299 handlers.go:131] GET /api/v1/namespaces/default/pods/nginx: (1.659705ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:20.546190 15299 handlers.go:131] PUT /api/v1/namespaces/default/pods/nginx/status: (3.503441ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:20.546980 15331 manager.go:223] Status for pod "nginx_default" updated successfully
==> kube-controller-manager.log <==
I0930 02:45:20.547001 15325 controller.go:153] No jobs found for pod nginx, job controller will avoid syncing
I0930 02:45:20.547008 15325 replication_controller.go:206] No controllers found for pod nginx, replication manager will avoid syncing
I0930 02:45:20.547271 15325 controller.go:250] Pod nginx updated.
I0930 02:45:20.547326 15325 controller.go:214] No daemon sets found for pod nginx, daemon set controller will avoid syncing
==> kubelet.log <==
I0930 02:45:20.547381 15331 config.go:252] Setting pods for source api
==> kube-apiserver.log <==
I0930 02:45:20.560219 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (167.829µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
I0930 02:45:20.560463 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (339.638µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:21.544815 15299 handlers.go:131] GET /api: (212.945µs) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36643]
I0930 02:45:21.547400 15299 handlers.go:131] GET /api/v1/namespaces/default/pods: (1.63547ms) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36643]
I0930 02:45:21.561333 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (209.454µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:21.561416 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (191.784µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
I0930 02:45:22.562471 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (232.571µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:22.562471 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (239.904µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
I0930 02:45:23.241512 15299 handlers.go:131] GET /api: (170.482µs) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36647]
I0930 02:45:23.244078 15299 handlers.go:131] GET /api/v1/namespaces/default/pods: (1.644201ms) 200 [[kubectl/v1.0.6 (linux/amd64) kubernetes/388061f] 127.0.0.1:36647]
I0930 02:45:23.513579 15299 handlers.go:131] GET /api/v1/resourcequotas: (1.236817ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
I0930 02:45:23.532173 15299 handlers.go:131] GET /api/v1/nodes: (1.416867ms) 200 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
==> kube-proxy.log <==
I0930 02:45:23.537496 15330 iptables.go:368] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I0930 02:45:23.539550 15330 iptables.go:368] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I0930 02:45:23.541428 15330 iptables.go:368] running iptables -N [KUBE-PORTALS-HOST -t nat]
I0930 02:45:23.543232 15330 iptables.go:368] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I0930 02:45:23.545025 15330 iptables.go:368] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I0930 02:45:23.546772 15330 iptables.go:368] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I0930 02:45:23.548827 15330 iptables.go:368] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I0930 02:45:23.550630 15330 iptables.go:368] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I0930 02:45:23.553626 15330 iptables.go:368] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/kubernetes: -p tcp -m tcp --dport 443 -d 10.0.0.1/32 -j REDIRECT --to-ports 39016]
I0930 02:45:23.556065 15330 iptables.go:368] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/kubernetes: -p tcp -m tcp --dport 443 -d 10.0.0.1/32 -j DNAT --to-destination 192.168.1.129:39016]
==> kube-apiserver.log <==
I0930 02:45:23.563614 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (184.66µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
I0930 02:45:23.563618 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (229.358µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
==> kubelet.log <==
I0930 02:45:24.136986 15331 docker.go:368] Docker Container: /devenv is not managed by kubelet.
I0930 02:45:24.137024 15331 docker.go:368] Docker Container: /suspicious_jang is not managed by kubelet.
I0930 02:45:24.137049 15331 docker.go:368] Docker Container: /happy_cray is not managed by kubelet.
I0930 02:45:24.137059 15331 docker.go:368] Docker Container: /naughty_meitner is not managed by kubelet.
I0930 02:45:24.137070 15331 docker.go:368] Docker Container: /lonely_pasteur is not managed by kubelet.
I0930 02:45:24.137079 15331 docker.go:368] Docker Container: /insane_bardeen is not managed by kubelet.
I0930 02:45:24.137091 15331 docker.go:368] Docker Container: /cocky_stallman is not managed by kubelet.
I0930 02:45:24.137101 15331 docker.go:368] Docker Container: /grave_pare is not managed by kubelet.
I0930 02:45:24.137109 15331 docker.go:368] Docker Container: /angry_torvalds is not managed by kubelet.
I0930 02:45:24.137118 15331 docker.go:368] Docker Container: /sleepy_babbage is not managed by kubelet.
I0930 02:45:24.137126 15331 docker.go:368] Docker Container: /serene_euclid is not managed by kubelet.
I0930 02:45:24.137135 15331 docker.go:368] Docker Container: /stoic_morse is not managed by kubelet.
I0930 02:45:24.137143 15331 docker.go:368] Docker Container: /clever_leakey is not managed by kubelet.
I0930 02:45:24.137151 15331 docker.go:368] Docker Container: /nostalgic_torvalds is not managed by kubelet.
I0930 02:45:24.137160 15331 docker.go:368] Docker Container: /compassionate_carson is not managed by kubelet.
I0930 02:45:24.137170 15331 docker.go:368] Docker Container: /stoic_poincare is not managed by kubelet.
I0930 02:45:24.137180 15331 docker.go:368] Docker Container: /jovial_elion is not managed by kubelet.
I0930 02:45:24.137189 15331 docker.go:368] Docker Container: /trusting_leakey is not managed by kubelet.
I0930 02:45:24.137197 15331 docker.go:368] Docker Container: /thirsty_almeida is not managed by kubelet.
==> kube-apiserver.log <==
I0930 02:45:24.332201 15299 handlers.go:131] GET /api/v1/nodes/127.0.0.1: (1.624575ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
I0930 02:45:24.338004 15299 handlers.go:131] PUT /api/v1/nodes/127.0.0.1/status: (3.79873ms) 200 [[kubelet/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36566]
==> kubelet.log <==
I0930 02:45:24.448791 15331 factory.go:71] Error trying to work out if we can handle /system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice: error inspecting container: No such container: system.slice
I0930 02:45:24.448831 15331 factory.go:82] Factory "docker" was unable to handle container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice"
I0930 02:45:24.448851 15331 factory.go:78] Using factory "raw" for container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice"
I0930 02:45:24.449044 15331 manager.go:798] Added container: "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice" (aliases: [], namespace: "")
I0930 02:45:24.449167 15331 handler.go:322] Added event &{/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice 2015-09-30 02:44:40.223373989 -0400 EDT containerCreation {<nil>}}
I0930 02:45:24.449363 15331 container.go:368] Start housekeeping for container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice"
I0930 02:45:24.449965 15331 factory.go:71] Error trying to work out if we can handle /system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service: error inspecting container: No such container: nginx.service
I0930 02:45:24.449993 15331 factory.go:82] Factory "docker" was unable to handle container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service"
I0930 02:45:24.450012 15331 factory.go:78] Using factory "raw" for container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service"
I0930 02:45:24.450181 15331 manager.go:798] Added container: "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service" (aliases: [], namespace: "")
I0930 02:45:24.450292 15331 handler.go:322] Added event &{/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service 2015-09-30 02:44:40.223373989 -0400 EDT containerCreation {<nil>}}
I0930 02:45:24.450352 15331 container.go:368] Start housekeeping for container "/system.slice/k8s_b748dbf0-673e-11e5-907f-d050994635df.service/system.slice/nginx.service"
==> kube-apiserver.log <==
I0930 02:45:24.496395 15299 handlers.go:131] GET /api/v1/watch/replicationcontrollers?resourceVersion=36: (29.998583178s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36592]
I0930 02:45:24.496586 15299 handlers.go:131] GET /api/v1/watch/services?resourceVersion=36: (29.998039465s) 0 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36591]
I0930 02:45:24.538051 15299 handlers.go:131] GET /api/v1/watch/endpoints?resourceVersion=36: (29.997764643s) 0 [[kube-proxy/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36605]
I0930 02:45:24.538061 15299 handlers.go:131] GET /api/v1/watch/services?resourceVersion=36: (29.997337984s) 0 [[kube-proxy/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36550]
I0930 02:45:24.564658 15299 handlers.go:131] GET /apis/experimental/v1alpha1/jobs: (250.101µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36638]
I0930 02:45:24.564680 15299 handlers.go:131] GET /apis/experimental/v1alpha1/daemonsets: (212.456µs) 404 [[kube-controller-manager/v1.1.0 (linux/amd64) kubernetes/b67028f] 127.0.0.1:36640]
==> kubelet.log <==
I0930 02:45:24.611265 15331 rkt.go:185] rkt: Run command:[version]
^C