@yifan-gu
Created October 9, 2015 00:13
e2e tests rkt nspawn stage1 aci
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:07:08.269: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 8 15:07:08.272: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 8 15:07:08.272: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1444342027 - Will randomize all specs
Will run 139 of 185 specs
EmptyDir volumes
should support (non-root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:07:08.279: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-cawzz
Oct 8 15:07:08.283: INFO: Get service account default in ns e2e-tests-emptydir-cawzz failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:10.285: INFO: Service account default in ns e2e-tests-emptydir-cawzz with secrets found. (2.006049817s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:07:10.285: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-cawzz
Oct 8 15:07:10.286: INFO: Service account default in ns e2e-tests-emptydir-cawzz with secrets found. (924.88µs)
[It] should support (non-root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 8 15:07:10.293: INFO: Waiting up to 5m0s for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:07:10.295: INFO: No Status.Info for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' yet
Oct 8 15:07:10.295: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.220161ms elapsed)
Oct 8 15:07:12.297: INFO: No Status.Info for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' yet
Oct 8 15:07:12.297: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003900026s elapsed)
Oct 8 15:07:14.299: INFO: No Status.Info for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' yet
Oct 8 15:07:14.299: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005517737s elapsed)
Oct 8 15:07:16.300: INFO: No Status.Info for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' yet
Oct 8 15:07:16.300: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007418038s elapsed)
Oct 8 15:07:18.302: INFO: No Status.Info for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' yet
Oct 8 15:07:18.302: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009190114s elapsed)
Oct 8 15:07:20.304: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-ebffb05e-6e08-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-cawzz' so far
Oct 8 15:07:20.304: INFO: Waiting for pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-cawzz' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011013908s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-ebffb05e-6e08-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:07:22.365: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:07:22.367: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:07:22.367: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-cawzz" for this suite.
• [SLOW TEST:19.129 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
------------------------------
SS
------------------------------
Examples e2e [Example]ClusterDns
should create pod that uses dns
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:545
[BeforeEach] Examples e2e
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:61
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:07:27.408: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-examples-fia2h
Oct 8 15:07:27.412: INFO: Get service account default in ns e2e-tests-examples-fia2h failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:29.414: INFO: Service account default in ns e2e-tests-examples-fia2h with secrets found. (2.005080917s)
[It] should create pod that uses dns
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:545
Oct 8 15:07:29.416: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample0-vfjfr
Oct 8 15:07:29.418: INFO: Get service account default in ns e2e-tests-dnsexample0-vfjfr failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:31.420: INFO: Service account default in ns e2e-tests-dnsexample0-vfjfr with secrets found. (2.00430834s)
Oct 8 15:07:31.425: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample1-ffn26
Oct 8 15:07:31.427: INFO: Get service account default in ns e2e-tests-dnsexample1-ffn26 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:33.430: INFO: Service account default in ns e2e-tests-dnsexample1-ffn26 with secrets found. (2.005019769s)
Oct 8 15:07:33.430: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample0-vfjfr'
[AfterEach] Examples e2e
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:68
STEP: Destroying namespace for this suite e2e-tests-examples-fia2h
• Failure [21.168 seconds]
Examples e2e
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:547
[Example]ClusterDns
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:546
should create pod that uses dns [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/examples.go:545
Oct 8 15:07:33.544: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample0-vfjfr] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml" does not exist
[] <nil> 0xc20842b1a0 exit status 1 <nil> true [0xc20824c548 0xc20824c568 0xc20824c588] [0xc20824c548 0xc20824c568 0xc20824c588] [0xc20824c560 0xc20824c580] [0x6bd870 0x6bd870] 0xc2083b0ba0}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
Proxy version v1
should proxy logs on node
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
[BeforeEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:07:48.576: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-stpbr
Oct 8 15:07:48.577: INFO: Get service account default in ns e2e-tests-proxy-stpbr failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:50.578: INFO: Service account default in ns e2e-tests-proxy-stpbr with secrets found. (2.002825668s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:07:50.578: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-stpbr
Oct 8 15:07:50.579: INFO: Service account default in ns e2e-tests-proxy-stpbr with secrets found. (929.302µs)
[It] should proxy logs on node
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
Oct 8 15:07:50.586: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 5.277066ms)
Oct 8 15:07:50.588: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 2.222988ms)
Oct 8 15:07:50.590: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 2.230174ms)
Oct 8 15:07:50.592: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.99026ms)
Oct 8 15:07:50.594: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.982752ms)
Oct 8 15:07:50.597: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 2.076147ms)
Oct 8 15:07:50.599: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 2.107525ms)
Oct 8 15:07:50.774: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 175.744597ms)
Oct 8 15:07:50.975: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.053435ms)
Oct 8 15:07:51.176: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 201.720356ms)
Oct 8 15:07:51.375: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 198.29474ms)
Oct 8 15:07:51.577: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 202.623744ms)
Oct 8 15:07:51.792: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 214.601665ms)
Oct 8 15:07:51.976: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 184.024566ms)
Oct 8 15:07:52.176: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.389977ms)
Oct 8 15:07:52.376: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.778456ms)
Oct 8 15:07:52.576: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.400244ms)
Oct 8 15:07:52.775: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.03039ms)
Oct 8 15:07:52.975: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.71982ms)
Oct 8 15:07:53.174: INFO: /api/v1/proxy/nodes/127.0.0.1/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.862638ms)
[AfterEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:07:53.174: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:07:53.374: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:07:53.374: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-stpbr" for this suite.
• [SLOW TEST:5.402 seconds]
Proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
------------------------------
SS
------------------------------
DaemonRestart
Kubelet should not restart containers across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:326
[BeforeEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:249
STEP: Skipping test, which is not implemented for local
[It] Kubelet should not restart containers across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:326
[AfterEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:255
•! Panic [0.004 seconds]
DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:327
Kubelet should not restart containers across restart [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:326
Test Panicked
runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:387
Full Stack Trace
/usr/local/go/src/runtime/panic.go:387 (0x415568)
gopanic: reflectcall(unsafe.Pointer(d.fn), deferArgs(d), uint32(d.siz), uint32(d.siz))
/usr/local/go/src/runtime/panic.go:42 (0x41470e)
panicmem: panic(memoryError)
/usr/local/go/src/runtime/sigpanic_unix.go:26 (0x41aec4)
sigpanic: panicmem()
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/client/unversioned/nodes.go:67 (0x69f563)
(*nodes).List: err := c.r.Get().Resource(c.resourceName()).LabelsSelectorParam(label).FieldsSelectorParam(field).Do().Into(result)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:1092 (0x4c817a)
getNodePublicIps: nodes, err := c.Nodes().List(labels.Everything(), fields.Everything())
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:310 (0x51e53d)
func.182: nodeIPs, err := getNodePublicIps(framework.Client)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:140 (0x4effee)
TestE2E: ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "Kubernetes e2e suite", r)
/usr/local/go/src/testing/testing.go:447 (0x460cef)
tRunner: test.F(t)
/usr/local/go/src/runtime/asm_amd64.s:2232 (0x42ab71)
goexit:
------------------------------
PreStop
should call prestop when killing a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
[BeforeEach] PreStop
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:07:53.982: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-fsnue
Oct 8 15:07:53.983: INFO: Get service account default in ns e2e-tests-prestop-fsnue failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:07:55.984: INFO: Service account default in ns e2e-tests-prestop-fsnue with secrets found. (2.002351617s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:07:55.984: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-fsnue
Oct 8 15:07:55.986: INFO: Service account default in ns e2e-tests-prestop-fsnue with secrets found. (1.289263ms)
[It] should call prestop when killing a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
STEP: Creating server pod server in namespace e2e-tests-prestop-fsnue
STEP: Waiting for pods to come up.
Oct 8 15:07:55.988: INFO: Waiting up to 5m0s for pod server status to be running
Oct 8 15:07:55.990: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (2.050331ms elapsed)
Oct 8 15:07:57.992: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (2.003972612s elapsed)
Oct 8 15:07:59.994: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (4.006150624s elapsed)
Oct 8 15:08:01.996: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (6.007886636s elapsed)
Oct 8 15:08:03.998: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (8.009692353s elapsed)
Oct 8 15:08:06.000: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (10.011793593s elapsed)
Oct 8 15:08:08.002: INFO: Found pod 'server' on node '127.0.0.1'
STEP: Creating tester pod server in namespace e2e-tests-prestop-fsnue
Oct 8 15:08:08.005: INFO: Waiting up to 5m0s for pod tester status to be running
Oct 8 15:08:08.007: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (1.787944ms elapsed)
Oct 8 15:08:10.009: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (2.0040338s elapsed)
Oct 8 15:08:12.011: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (4.005630467s elapsed)
Oct 8 15:08:14.013: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (6.008092389s elapsed)
Oct 8 15:08:16.015: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (8.010176561s elapsed)
Oct 8 15:08:18.017: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-fsnue' status to be 'running'(found phase: "Pending", readiness: false) (10.012041227s elapsed)
Oct 8 15:08:20.020: INFO: Found pod 'tester' on node '127.0.0.1'
STEP: Deleting pre-stop pod
Oct 8 15:08:25.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:30.032: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:35.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:40.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:45.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:50.029: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:08:55.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:00.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:05.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:10.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:15.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:20.030: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
Oct 8 15:09:20.032: INFO: Saw: {
"Hostname": "rkt-d1e6bdb5-27b5-4634-abaa-8287c620cb86",
"Sent": null,
"Received": null,
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] PreStop
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-prestop-fsnue".
Oct 8 15:09:20.043: INFO: event for server: {scheduler } Scheduled: Successfully assigned server to 127.0.0.1
Oct 8 15:09:20.043: INFO: event for server: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/nettest:1.6" already present on machine
Oct 8 15:09:20.043: INFO: event for server: {kubelet 127.0.0.1} Created: Created with rkt id d1e6bdb5
Oct 8 15:09:20.043: INFO: event for server: {kubelet 127.0.0.1} Started: Started with rkt id d1e6bdb5
Oct 8 15:09:20.043: INFO: event for tester: {scheduler } Scheduled: Successfully assigned tester to 127.0.0.1
Oct 8 15:09:20.043: INFO: event for tester: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/busybox" already present on machine
Oct 8 15:09:20.043: INFO: event for tester: {kubelet 127.0.0.1} Created: Created with rkt id 41a00ed5
Oct 8 15:09:20.043: INFO: event for tester: {kubelet 127.0.0.1} Started: Started with rkt id 41a00ed5
Oct 8 15:09:20.043: INFO: event for tester: {kubelet 127.0.0.1} Killing: Killing with rkt id 41a00ed5
Oct 8 15:09:20.045: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:09:20.045: INFO: server 127.0.0.1 Running 30s [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:08:06 -0700 PDT }]
Oct 8 15:09:20.045: INFO:
Oct 8 15:09:20.045: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:09:20.054: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:09:20.054: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-prestop-fsnue" for this suite.
• Failure [91.104 seconds]
PreStop
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:150
should call prestop when killing a pod [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
validating pre-stop.
Expected error:
<*errors.errorString | 0xc2080da940>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:141
------------------------------
Pods
should have monotonically increasing restart count
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:570
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:09:25.088: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-r8y2k
Oct 8 15:09:25.089: INFO: Get service account default in ns e2e-tests-pods-r8y2k failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:09:27.090: INFO: Service account default in ns e2e-tests-pods-r8y2k with secrets found. (2.002819259s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:09:27.090: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-r8y2k
Oct 8 15:09:27.092: INFO: Service account default in ns e2e-tests-pods-r8y2k with secrets found. (1.171669ms)
[It] should have monotonically increasing restart count
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:570
STEP: Creating pod liveness-http in namespace e2e-tests-pods-r8y2k
Oct 8 15:09:27.094: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Oct 8 15:09:27.096: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (1.980737ms elapsed)
Oct 8 15:09:29.098: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (2.003829413s elapsed)
Oct 8 15:09:31.100: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (4.005845883s elapsed)
Oct 8 15:09:33.102: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (6.007694239s elapsed)
Oct 8 15:09:35.104: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (8.009950932s elapsed)
Oct 8 15:09:37.108: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-r8y2k' status to be '!pending'(found phase: "Pending", readiness: false) (10.013542323s elapsed)
Oct 8 15:09:39.111: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-r8y2k' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-r8y2k
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: Restart count of pod e2e-tests-pods-r8y2k/liveness-http is now 1 (14.017164057s elapsed)
STEP: Restart count of pod e2e-tests-pods-r8y2k/liveness-http is now 2 (34.047441626s elapsed)
STEP: Restart count of pod e2e-tests-pods-r8y2k/liveness-http is now 3 (54.075246526s elapsed)
STEP: Restart count of pod e2e-tests-pods-r8y2k/liveness-http is now 4 (1m14.094408033s elapsed)
STEP: Restart count of pod e2e-tests-pods-r8y2k/liveness-http is now 5 (1m34.119358896s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:11:13.280: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:11:13.282: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:11:13.282: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-r8y2k" for this suite.
• [SLOW TEST:113.251 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should have monotonically increasing restart count
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:570
------------------------------
S
------------------------------
EmptyDir volumes
should support (non-root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:11:18.336: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-vy4ua
Oct 8 15:11:18.337: INFO: Get service account default in ns e2e-tests-emptydir-vy4ua failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:11:20.339: INFO: Service account default in ns e2e-tests-emptydir-vy4ua with secrets found. (2.003278515s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:11:20.339: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-vy4ua
Oct 8 15:11:20.340: INFO: Service account default in ns e2e-tests-emptydir-vy4ua with secrets found. (1.138052ms)
[It] should support (non-root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 8 15:11:20.344: INFO: Waiting up to 5m0s for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:11:20.347: INFO: No Status.Info for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:11:20.347: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.860282ms elapsed)
Oct 8 15:11:22.349: INFO: No Status.Info for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:11:22.349: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004733192s elapsed)
Oct 8 15:11:24.352: INFO: No Status.Info for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:11:24.352: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007753024s elapsed)
Oct 8 15:11:26.354: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-vy4ua' so far
Oct 8 15:11:26.354: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.010183523s elapsed)
Oct 8 15:11:28.357: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-vy4ua' so far
Oct 8 15:11:28.357: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.012398136s elapsed)
Oct 8 15:11:30.358: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-810afd45-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-vy4ua' so far
Oct 8 15:11:30.358: INFO: Waiting for pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-vy4ua' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.014124581s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-810afd45-6e09-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:11:32.428: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:11:32.430: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:11:32.430: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-vy4ua" for this suite.
• [SLOW TEST:19.141 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
------------------------------
S
------------------------------
Resource usage of system containers
should not exceed expected amount.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:128
[BeforeEach] Resource usage of system containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:94
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should not exceed expected amount.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:128
STEP: Getting ResourceConsumption on all nodes
•! Panic [15.577 seconds]
Resource usage of system containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:129
should not exceed expected amount. [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:128
Test Panicked
runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:387
Full Stack Trace
/usr/local/go/src/runtime/panic.go:387 (0x415568)
gopanic: reflectcall(unsafe.Pointer(d.fn), deferArgs(d), uint32(d.siz), uint32(d.siz))
/usr/local/go/src/runtime/panic.go:42 (0x41470e)
panicmem: panic(memoryError)
/usr/local/go/src/runtime/sigpanic_unix.go:26 (0x41aec4)
sigpanic: panicmem()
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:70 (0x4a565c)
computeAverage: MemoryWorkingSetInBytes: result[container].MemoryWorkingSetInBytes + usage[container].MemoryWorkingSetInBytes,
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitor_resources.go:114 (0x5595e3)
func.399: averageResourceUsagePerNode[node.Name] = computeAverage(resourceUsagePerNode[node.Name])
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:140 (0x4effee)
TestE2E: ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "Kubernetes e2e suite", r)
/usr/local/go/src/testing/testing.go:447 (0x460cef)
tRunner: test.F(t)
/usr/local/go/src/runtime/asm_amd64.s:2232 (0x42ab71)
goexit:
------------------------------
DaemonRestart
Controller Manager should not create/delete replicas across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:284
[BeforeEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:249
STEP: Skipping test, which is not implemented for local
[It] Controller Manager should not create/delete replicas across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:284
Oct 8 15:11:53.052: INFO: WARNING: SSH through the restart config might not work on local
Oct 8 15:11:53.052: INFO: Checking if Daemon kube-controller on node is up by polling for a 200 on its /healthz endpoint
[AfterEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:255
• Failure [5.006 seconds]
DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:327
Controller Manager should not create/delete replicas across restart [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:284
Expected error:
<*errors.errorString | 0xc20858e3b0>: {
s: "error getting signer for provider local: 'getSigner(...) not implemented for local'",
}
error getting signer for provider local: 'getSigner(...) not implemented for local'
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63
------------------------------
Kibana Logging Instances Is Alive
should check that the Kibana logging instance is alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
[BeforeEach] Kibana Logging Instances Is Alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:11:58.061: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kibana-logging-8xrc3
Oct 8 15:11:58.062: INFO: Get service account default in ns e2e-tests-kibana-logging-8xrc3 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:00.063: INFO: Service account default in ns e2e-tests-kibana-logging-8xrc3 with secrets found. (2.00252565s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:12:00.063: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kibana-logging-8xrc3
Oct 8 15:12:00.064: INFO: Service account default in ns e2e-tests-kibana-logging-8xrc3 with secrets found. (983.93µs)
[BeforeEach] Kibana Logging Instances Is Alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:38
[AfterEach] Kibana Logging Instances Is Alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:12:00.069: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:12:00.071: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:12:00.071: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kibana-logging-8xrc3" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.019 seconds]
Kibana Logging Instances Is Alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:43
should check that the Kibana logging instance is alive [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Oct 8 15:12:00.064: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Kubectl expose
should create services for rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:403
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:12:05.079: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-5pbmm
Oct 8 15:12:05.080: INFO: Get service account default in ns e2e-tests-kubectl-5pbmm failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:07.082: INFO: Service account default in ns e2e-tests-kubectl-5pbmm with secrets found. (2.002485106s)
[It] should create services for rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:403
STEP: creating Redis RC
Oct 8 15:12:07.082: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-5pbmm'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-5pbmm
• Failure [7.046 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl expose
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:404
should create services for rc [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:403
Oct 8 15:12:07.106: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-5pbmm] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
[] <nil> 0xc20855bb20 exit status 1 <nil> true [0xc20824c380 0xc20824c3a0 0xc20824c3e0] [0xc20824c380 0xc20824c3a0 0xc20824c3e0] [0xc20824c398 0xc20824c3c0] [0x6bd870 0x6bd870] 0xc20855ed80}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
Secrets
should be consumable from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
[BeforeEach] Secrets
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:12:12.128: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-3dyq8
Oct 8 15:12:12.131: INFO: Get service account default in ns e2e-tests-secrets-3dyq8 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:14.134: INFO: Service account default in ns e2e-tests-secrets-3dyq8 with secrets found. (2.006462765s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:12:14.135: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-3dyq8
Oct 8 15:12:14.137: INFO: Service account default in ns e2e-tests-secrets-3dyq8 with secrets found. (2.512778ms)
[It] should be consumable from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
STEP: Creating secret with name secret-test-a11bb40b-6e09-11e5-bcd2-28d244b00276
STEP: Creating a pod to test consume secrets
Oct 8 15:12:14.183: INFO: Waiting up to 5m0s for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:12:14.185: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:14.185: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.255397ms elapsed)
Oct 8 15:12:16.186: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:16.186: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003893701s elapsed)
Oct 8 15:12:18.188: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:18.188: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005561899s elapsed)
Oct 8 15:12:20.190: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:20.190: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007036496s elapsed)
Oct 8 15:12:22.194: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:22.194: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.011318797s elapsed)
Oct 8 15:12:24.196: INFO: Nil State.Terminated for container 'secret-test' in pod 'pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-secrets-3dyq8' so far
Oct 8 15:12:24.196: INFO: Waiting for pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-secrets-3dyq8' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.013438085s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-secrets-a11c8470-6e09-11e5-bcd2-28d244b00276 container secret-test: <nil>
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r--
content of file "/etc/secret-volume/data-1": value-1
STEP: Cleaning up the secret
[AfterEach] Secrets
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:12:26.325: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:12:26.328: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:12:26.328: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-secrets-3dyq8" for this suite.
• [SLOW TEST:19.243 seconds]
Secrets
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:100
should be consumable from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
------------------------------
SS
------------------------------
Services
should work after restarting apiserver
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:344
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:12:31.389: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-i9fqp
Oct 8 15:12:31.390: INFO: Get service account default in ns e2e-tests-services-i9fqp failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:33.392: INFO: Service account default in ns e2e-tests-services-i9fqp with secrets found. (2.002416615s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:12:33.392: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-i9fqp
Oct 8 15:12:33.393: INFO: Service account default in ns e2e-tests-services-i9fqp with secrets found. (911.606µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should work after restarting apiserver
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:344
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:12:33.399: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:12:33.401: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:12:33.401: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-i9fqp" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
S [SKIPPING] [7.041 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should work after restarting apiserver [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:344
Oct 8 15:12:33.393: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Addon update
should propagate add-on file changes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:322
[BeforeEach] Addon update
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:12:38.410: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-addon-update-test-lfc8v
Oct 8 15:12:38.411: INFO: Get service account default in ns e2e-tests-addon-update-test-lfc8v failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:40.412: INFO: Service account default in ns e2e-tests-addon-update-test-lfc8v with secrets found. (2.002266384s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:12:40.412: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-addon-update-test-lfc8v
Oct 8 15:12:40.413: INFO: Service account default in ns e2e-tests-addon-update-test-lfc8v with secrets found. (904.445µs)
[BeforeEach] Addon update
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:214
[It] should propagate add-on file changes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:322
[AfterEach] Addon update
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:12:40.416: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:12:40.417: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:12:40.417: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-addon-update-test-lfc8v" for this suite.
[AfterEach] Addon update
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:222
S [SKIPPING] [7.016 seconds]
Addon update
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:323
should propagate add-on file changes [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:322
Oct 8 15:12:40.413: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
EmptyDir volumes
should support (non-root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:12:45.426: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4vo8x
Oct 8 15:12:45.426: INFO: Get service account default in ns e2e-tests-emptydir-4vo8x failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:12:47.428: INFO: Service account default in ns e2e-tests-emptydir-4vo8x with secrets found. (2.002374187s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:12:47.428: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4vo8x
Oct 8 15:12:47.429: INFO: Service account default in ns e2e-tests-emptydir-4vo8x with secrets found. (1.185989ms)
[It] should support (non-root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 8 15:12:47.432: INFO: Waiting up to 5m0s for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:12:47.435: INFO: No Status.Info for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:47.435: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.763248ms elapsed)
Oct 8 15:12:49.436: INFO: No Status.Info for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:49.436: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004331561s elapsed)
Oct 8 15:12:51.438: INFO: No Status.Info for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' yet
Oct 8 15:12:51.438: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.00599665s elapsed)
Oct 8 15:12:53.440: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4vo8x' so far
Oct 8 15:12:53.440: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007866008s elapsed)
Oct 8 15:12:55.442: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4vo8x' so far
Oct 8 15:12:55.442: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009889678s elapsed)
Oct 8 15:12:57.444: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4vo8x' so far
Oct 8 15:12:57.444: INFO: Waiting for pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4vo8x' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011773306s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-b4f3aa4c-6e09-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:12:59.513: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:12:59.516: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:12:59.516: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-4vo8x" for this suite.
• [SLOW TEST:19.141 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
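Note on the output above: the "mount type of /test-volume: 61267" line is the volume's statfs f_type in decimal; 61267 is 0xEF53, the ext4 superblock magic, so this default-medium emptyDir is backed by the node's disk. A memory-medium (tmpfs) emptyDir would report 16914836 (0x01021994) instead. A quick way to read the same value for any path on the node (sketch using standard coreutils stat; /var/lib/kubelet is the usual kubelet data dir where emptyDir volumes live, adjust if this setup differs):

  # print the filesystem type in hex and by name for the kubelet data dir
  stat -f -c 'type=%t (%T)' /var/lib/kubelet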
------------------------------
Kubectl client Simple pod
should support exec
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:188
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:13:04.566: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7g7mk
Oct 8 15:13:04.567: INFO: Get service account default in ns e2e-tests-kubectl-7g7mk failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:13:06.568: INFO: Service account default in ns e2e-tests-kubectl-7g7mk with secrets found. (2.002333174s)
[BeforeEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
STEP: creating the pod
Oct 8 15:13:06.568: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-7g7mk'
[AfterEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:156
STEP: using delete to clean up resources
Oct 8 15:13:06.583: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-7g7mk'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-7g7mk
• Failure in Spec Setup (BeforeEach) [7.040 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
should support exec [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:188
Oct 8 15:13:06.578: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-7g7mk] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
[] <nil> 0xc208316fa0 exit status 1 <nil> true [0xc20824cbe8 0xc20824cc08 0xc20824cc28] [0xc20824cbe8 0xc20824cc08 0xc20824cc28] [0xc20824cc00 0xc20824cc20] [0x6bd870 0x6bd870] 0xc20821fce0}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
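The failure above is environmental rather than a kubelet/rkt problem: the test builds the manifest path from its configured repo root, which in this run points at the github.com/coreos/kubernetes checkout, and that tree does not contain docs/user-guide/pod.yaml. Pointing the runner's repo-root setting at a tree that ships the user-guide examples (assumption: this build exposes it as a repo-root option) should let the Simple pod specs proceed. A sketch of reproducing the failing step by hand, with the other checkout visible in this log as a stand-in repo root and a placeholder namespace:

  # same create call the test runs, against a checkout that has the manifest
  kubectl --server=127.0.0.1:8080 --kubeconfig=$HOME/.kubernetes_auth \
    create -f $HOME/kubernetes/docs/user-guide/pod.yaml \
    --namespace=<test-namespace>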
------------------------------
Job
should run a job to completion when tasks sometimes fail and are locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:75
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:13:11.607: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-5v33d
Oct 8 15:13:11.608: INFO: Get service account default in ns e2e-tests-job-5v33d failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:13:13.610: INFO: Service account default in ns e2e-tests-job-5v33d with secrets found. (2.003178912s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:13:13.610: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-5v33d
Oct 8 15:13:13.611: INFO: Service account default in ns e2e-tests-job-5v33d with secrets found. (1.19151ms)
[It] should run a job to completion when tasks sometimes fail and are locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:75
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-5v33d".
Oct 8 15:13:13.617: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:13:13.617: INFO:
Oct 8 15:13:13.617: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:13:13.618: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:13:13.618: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-5v33d" for this suite.
• Failure [7.026 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks sometimes fail and are locally restarted [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:75
Expected error:
<*errors.StatusError | 0xc2083a5900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:70
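The 404 here comes from the create call itself: at this point Jobs are served under the extensions/v1beta1 API group, and "the server could not find the requested resource" means the locally started apiserver is not exposing that path, not that the pods misbehaved. A sketch for confirming which groups are served and, assuming the local cluster script forwards apiserver flags, enabling the group (flag syntax from the kube-apiserver --runtime-config documentation of this era; verify against the local binary):

  # list the API groups the local apiserver exposes
  curl -s http://127.0.0.1:8080/apis
  # possible apiserver flag to serve the extensions group
  kube-apiserver ... --runtime-config=extensions/v1beta1=true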
------------------------------
Proxy version v1
should proxy through a service and a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:170
[BeforeEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:13:18.632: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-znq9p
Oct 8 15:13:18.633: INFO: Get service account default in ns e2e-tests-proxy-znq9p failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:13:20.634: INFO: Service account default in ns e2e-tests-proxy-znq9p with secrets found. (2.002425127s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:13:20.634: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-znq9p
Oct 8 15:13:20.635: INFO: Service account default in ns e2e-tests-proxy-znq9p with secrets found. (954.207µs)
[It] should proxy through a service and a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:170
STEP: creating replication controller proxy-service-rival in namespace e2e-tests-proxy-znq9p
Oct 8 15:13:20.715: INFO: Created replication controller with name: proxy-service-rival, namespace: e2e-tests-proxy-znq9p, replica count: 1
Oct 8 15:13:21.715: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:22.716: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:23.717: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:24.717: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:25.717: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:26.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:27.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:28.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:29.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:30.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:31.718: INFO: proxy-service-rival Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:13:40.945: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 2.12874ms)
Oct 8 15:13:41.095: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.913045ms)
Oct 8 15:13:41.245: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.460543ms)
Oct 8 15:13:41.395: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.374167ms)
Oct 8 15:13:41.545: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.681058ms)
Oct 8 15:13:41.695: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.645103ms)
Oct 8 15:13:41.845: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.312092ms)
Oct 8 15:13:41.996: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.44598ms)
Oct 8 15:13:42.146: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.549894ms)
Oct 8 15:13:42.296: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.573565ms)
Oct 8 15:13:42.446: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.295676ms)
Oct 8 15:13:42.596: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.270979ms)
Oct 8 15:13:42.746: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.50356ms)
Oct 8 15:13:42.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.429475ms)
Oct 8 15:13:43.047: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.47151ms)
Oct 8 15:13:43.197: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.49183ms)
Oct 8 15:13:43.347: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.50653ms)
Oct 8 15:13:43.497: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.431578ms)
Oct 8 15:13:43.647: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.336469ms)
Oct 8 15:13:43.797: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.491072ms)
Oct 8 15:13:43.947: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.508491ms)
Oct 8 15:13:44.098: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.762653ms)
Oct 8 15:13:44.247: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.242394ms)
Oct 8 15:13:44.397: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.27715ms)
Oct 8 15:13:44.548: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.00942ms)
Oct 8 15:13:44.698: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.171021ms)
Oct 8 15:13:44.848: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.165695ms)
Oct 8 15:13:44.998: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.580689ms)
Oct 8 15:13:45.148: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.584055ms)
Oct 8 15:13:45.298: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.433086ms)
Oct 8 15:13:45.448: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.346595ms)
Oct 8 15:13:45.598: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.282892ms)
Oct 8 15:13:45.749: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.192593ms)
Oct 8 15:13:45.899: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.535642ms)
Oct 8 15:13:46.049: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.339819ms)
Oct 8 15:13:46.199: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.503371ms)
Oct 8 15:13:46.350: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 2.658528ms)
Oct 8 15:13:46.499: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.453175ms)
Oct 8 15:13:46.649: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.615533ms)
Oct 8 15:13:46.832: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 33.853519ms)
Oct 8 15:13:47.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 82.988423ms)
Oct 8 15:13:47.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 132.954613ms)
Oct 8 15:13:47.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 182.960344ms)
Oct 8 15:13:47.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 232.776406ms)
Oct 8 15:13:47.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 282.841395ms)
Oct 8 15:13:48.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 332.252389ms)
Oct 8 15:13:48.233: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 383.755636ms)
Oct 8 15:13:48.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 432.19076ms)
Oct 8 15:13:48.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 482.354909ms)
Oct 8 15:13:48.832: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 532.792887ms)
Oct 8 15:13:49.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 581.653296ms)
Oct 8 15:13:49.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 631.716171ms)
Oct 8 15:13:49.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 679.032898ms)
Oct 8 15:13:49.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 728.768867ms)
Oct 8 15:13:49.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 778.699729ms)
Oct 8 15:13:50.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 828.313468ms)
Oct 8 15:13:50.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 878.235191ms)
Oct 8 15:13:50.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 928.527553ms)
Oct 8 15:13:50.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 978.567153ms)
Oct 8 15:13:50.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.028147433s)
Oct 8 15:13:51.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.078050944s)
Oct 8 15:13:51.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.127932729s)
Oct 8 15:13:51.432: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.178161862s)
Oct 8 15:13:51.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.227513108s)
Oct 8 15:13:51.832: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.27799367s)
Oct 8 15:13:52.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.327241773s)
Oct 8 15:13:52.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.377189436s)
Oct 8 15:13:52.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.427304267s)
Oct 8 15:13:52.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.477274778s)
Oct 8 15:13:52.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.527123464s)
Oct 8 15:13:53.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.576710053s)
Oct 8 15:13:53.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.626588874s)
Oct 8 15:13:53.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 1.676774478s)
Oct 8 15:13:53.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.727708329s)
Oct 8 15:13:53.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.776400257s)
Oct 8 15:13:54.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 1.826320107s)
Oct 8 15:13:54.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 1.876258963s)
Oct 8 15:13:54.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 1.925809661s)
Oct 8 15:13:54.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 1.976028965s)
Oct 8 15:13:54.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.025900272s)
Oct 8 15:13:55.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 2.075771621s)
Oct 8 15:13:55.232: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 2.126288757s)
Oct 8 15:13:55.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 2.175299868s)
Oct 8 15:13:55.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.225300778s)
Oct 8 15:13:55.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 2.275157021s)
Oct 8 15:13:56.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 2.324908197s)
Oct 8 15:13:56.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 2.374731987s)
Oct 8 15:13:56.432: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.425647721s)
Oct 8 15:13:56.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 2.475239879s)
Oct 8 15:13:56.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 2.524713787s)
Oct 8 15:13:57.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 2.574344946s)
Oct 8 15:13:57.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 2.624285123s)
Oct 8 15:13:57.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.674252096s)
Oct 8 15:13:57.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 2.724828583s)
Oct 8 15:13:57.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 2.774019901s)
Oct 8 15:13:58.031: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname1/: foo (200; 2.823530799s)
Oct 8 15:13:58.231: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/services/proxy-service-rival:portname2/: bar (200; 2.873416461s)
Oct 8 15:13:58.431: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:80/rewrite... (200; 2.923541187s)
Oct 8 15:13:58.632: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:160/: foo (200; 2.973549284s)
Oct 8 15:13:58.831: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-znq9p/pods/proxy-service-rival-416hr:162/: bar (200; 3.023421438s)
STEP: deleting replication controller proxy-service-rival in namespace e2e-tests-proxy-znq9p
Oct 8 15:14:01.445: INFO: Deleting RC proxy-service-rival took: 2.412300236s
Oct 8 15:14:11.448: INFO: Terminating RC proxy-service-rival pods took: 10.002976673s
[AfterEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:14:11.523: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:14:11.524: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:14:11.524: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-znq9p" for this suite.
• [SLOW TEST:57.932 seconds]
Proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy through a service and a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:170
------------------------------
SchedulerPredicates
validates resource limits of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:319
[BeforeEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:153
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:14:16.564: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-kc7mn
Oct 8 15:14:16.565: INFO: Get service account default in ns e2e-tests-sched-pred-kc7mn failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:14:18.566: INFO: Service account default in ns e2e-tests-sched-pred-kc7mn with secrets found. (2.002533718s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:14:18.566: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-kc7mn
Oct 8 15:14:18.567: INFO: Service account default in ns e2e-tests-sched-pred-kc7mn with secrets found. (924.698µs)
[It] validates resource limits of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:319
Oct 8 15:14:18.570: INFO: Node: 127.0.0.1 has capacity: 4000
STEP: Starting additional 8 Pods to fully saturate the cluster CPU and trying to start another one
Oct 8 15:14:19.365: INFO: 8 pods running
Oct 8 15:14:24.444: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-kc7mn
[AfterEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:163
Oct 8 15:14:37.008: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:14:37.010: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:14:37.010: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-sched-pred-kc7mn" for this suite.
• [SLOW TEST:25.541 seconds]
SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:433
validates resource limits of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:319
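For context on the passing spec above: with a reported capacity of 4000 millicores and no other pods on the node, the test starts 8 pods that together claim the full 4000m (500m each) and then checks that one more pod stays Pending for lack of CPU. A minimal sketch of the shape of one such saturating pod (hypothetical name and image; the real test uses its own filler pods):

  kubectl --server=127.0.0.1:8080 create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-filler-0
  spec:
    containers:
    - name: pause
      image: gcr.io/google_containers/pause
      resources:
        limits:
          cpu: 500m
  EOF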
------------------------------
Deployment
deployment should create new pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:34
[BeforeEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:14:42.105: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-pu38i
Oct 8 15:14:42.106: INFO: Get service account default in ns e2e-tests-deployment-pu38i failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:14:44.107: INFO: Service account default in ns e2e-tests-deployment-pu38i with secrets found. (2.002273258s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:14:44.107: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-pu38i
Oct 8 15:14:44.108: INFO: Service account default in ns e2e-tests-deployment-pu38i with secrets found. (889.707µs)
[It] deployment should create new pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:34
Oct 8 15:14:44.108: INFO: Creating simple deployment nginx-deployment
[AfterEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-deployment-pu38i".
Oct 8 15:14:44.113: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:14:44.113: INFO:
Oct 8 15:14:44.113: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:14:44.115: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:14:44.115: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-deployment-pu38i" for this suite.
• Failure [7.021 seconds]
Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:41
deployment should create new pods [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:34
Expected error:
<*errors.StatusError | 0xc208532000>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:72
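Same root cause as the Job failure earlier in this run: Deployments also live under extensions/v1beta1 at this point, and the group (or the deployments resource specifically, which is off by default in this era) is not being served by the local apiserver. The documented toggle, assuming the local cluster start-up lets apiserver flags through, is sketched below:

  kube-apiserver ... --runtime-config=extensions/v1beta1/deployments=true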
------------------------------
Services
should release NodePorts on delete
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:764
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:14:49.126: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-rz1ly
Oct 8 15:14:49.127: INFO: Get service account default in ns e2e-tests-services-rz1ly failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:14:51.129: INFO: Service account default in ns e2e-tests-services-rz1ly with secrets found. (2.003363259s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:14:51.129: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-rz1ly
Oct 8 15:14:51.130: INFO: Service account default in ns e2e-tests-services-rz1ly with secrets found. (925.001µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should release NodePorts on delete
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:764
STEP: creating service nodeport-reuse with type NodePort in namespace e2e-tests-services-rz1ly
STEP: deleting original service nodeport-reuse
STEP: creating service nodeport-reuse with same NodePort 32262
STEP: deleting service nodeport-reuse in namespace e2e-tests-services-rz1ly
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:14:51.622: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:14:51.623: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:14:51.624: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-rz1ly" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.539 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should release NodePorts on delete
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:764
------------------------------
Services
should provide secure master service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:71
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:14:56.665: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-3nsku
Oct 8 15:14:56.666: INFO: Get service account default in ns e2e-tests-services-3nsku failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:14:58.668: INFO: Service account default in ns e2e-tests-services-3nsku with secrets found. (2.002900725s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:14:58.668: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-3nsku
Oct 8 15:14:58.668: INFO: Service account default in ns e2e-tests-services-3nsku with secrets found. (909.108µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should provide secure master service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:71
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:14:58.671: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:14:58.673: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:14:58.673: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-3nsku" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.016 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should provide secure master service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:71
------------------------------
SSS
------------------------------
P [PENDING]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should get a host IP
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:150
------------------------------
S
------------------------------
SchedulerPredicates
validates that NodeSelector is respected if not matching
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:359
[BeforeEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:153
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:15:03.681: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-ei8bz
Oct 8 15:15:03.682: INFO: Get service account default in ns e2e-tests-sched-pred-ei8bz failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:15:05.684: INFO: Service account default in ns e2e-tests-sched-pred-ei8bz with secrets found. (2.002660469s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:15:05.684: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-ei8bz
Oct 8 15:15:05.685: INFO: Service account default in ns e2e-tests-sched-pred-ei8bz with secrets found. (915.009µs)
[It] validates that NodeSelector is respected if not matching
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:359
STEP: Trying to schedule Pod with nonempty NodeSelector.
Oct 8 15:15:05.690: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-ei8bz
[AfterEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:163
Oct 8 15:15:15.738: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:15:15.740: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:15:15.740: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-sched-pred-ei8bz" for this suite.
• [SLOW TEST:17.097 seconds]
SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:433
validates that NodeSelector is respected if not matching
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:359
------------------------------
S
------------------------------
Nodes Network when a minion node becomes unreachable
[replication controller] recreates pods scheduled on the unreachable minion node AND allows scheduling of pods on a minion after it rejoins the cluster
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:536
[BeforeEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:395
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:15:20.780: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-usz19
Oct 8 15:15:20.781: INFO: Get service account default in ns e2e-tests-resize-nodes-usz19 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:15:22.782: INFO: Service account default in ns e2e-tests-resize-nodes-usz19 with secrets found. (2.002620116s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:15:22.782: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-usz19
Oct 8 15:15:22.783: INFO: Service account default in ns e2e-tests-resize-nodes-usz19 with secrets found. (965.94µs)
[BeforeEach] when a minion node becomes unreachable
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:480
[AfterEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:397
Oct 8 15:15:22.786: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:15:22.788: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:15:22.788: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-resize-nodes-usz19" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.018 seconds]
Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:539
Network
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:538
when a minion node becomes unreachable
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:537
[replication controller] recreates pods scheduled on the unreachable minion node AND allows scheduling of pods on a minion after it rejoins the cluster [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:536
Oct 8 15:15:22.783: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
EmptyDir volumes
volume on default medium should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:15:27.798: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ht892
Oct 8 15:15:27.799: INFO: Get service account default in ns e2e-tests-emptydir-ht892 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:15:29.802: INFO: Service account default in ns e2e-tests-emptydir-ht892 with secrets found. (2.004090209s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:15:29.802: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ht892
Oct 8 15:15:29.803: INFO: Service account default in ns e2e-tests-emptydir-ht892 with secrets found. (1.215986ms)
[It] volume on default medium should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 8 15:15:29.807: INFO: Waiting up to 5m0s for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:15:29.812: INFO: No Status.Info for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:29.812: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.208892ms elapsed)
Oct 8 15:15:31.815: INFO: No Status.Info for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:31.815: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.007246442s elapsed)
Oct 8 15:15:33.816: INFO: No Status.Info for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:33.816: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.008825144s elapsed)
Oct 8 15:15:35.818: INFO: No Status.Info for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:35.818: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.010425151s elapsed)
Oct 8 15:15:37.822: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-ht892' so far
Oct 8 15:15:37.822: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.014333274s elapsed)
Oct 8 15:15:39.823: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-15bbf271-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-ht892' so far
Oct 8 15:15:39.824: INFO: Waiting for pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ht892' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.016129752s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-15bbf271-6e0a-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:15:41.893: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:15:41.894: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:15:41.894: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ht892" for this suite.
• [SLOW TEST:19.157 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on default medium should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
------------------------------
Reboot
each node by ordering clean reboot and ensure they function upon restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:65
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by ordering clean reboot and ensure they function upon restart [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:65
Oct 8 15:15:46.952: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Docker Containers
should be able to override the image's default command and arguments
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
[BeforeEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:15:46.957: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-xbm5t
Oct 8 15:15:46.957: INFO: Get service account default in ns e2e-tests-containers-xbm5t failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:15:48.959: INFO: Service account default in ns e2e-tests-containers-xbm5t with secrets found. (2.002113104s)
[It] should be able to override the image's default command and arguments
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
STEP: Creating a pod to test override all
Oct 8 15:15:48.961: INFO: Waiting up to 5m0s for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:15:48.963: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:48.963: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.617438ms elapsed)
Oct 8 15:15:50.965: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:50.965: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003353141s elapsed)
Oct 8 15:15:52.966: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:52.966: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004959743s elapsed)
Oct 8 15:15:54.968: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:54.968: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006616125s elapsed)
Oct 8 15:15:56.970: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:56.970: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008079209s elapsed)
Oct 8 15:15:58.971: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:15:58.971: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.009775232s elapsed)
Oct 8 15:16:00.973: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-xbm5t' so far
Oct 8 15:16:00.973: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Running", readiness: true) (12.011589057s elapsed)
Oct 8 15:16:02.975: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-xbm5t' so far
Oct 8 15:16:02.975: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Running", readiness: true) (14.013715693s elapsed)
Oct 8 15:16:04.977: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-xbm5t' so far
Oct 8 15:16:04.977: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Running", readiness: true) (16.015647788s elapsed)
Oct 8 15:16:06.979: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-xbm5t' so far
Oct 8 15:16:06.979: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Running", readiness: true) (18.017506565s elapsed)
Oct 8 15:16:08.981: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-xbm5t' so far
Oct 8 15:16:08.981: INFO: Waiting for pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-xbm5t' status to be 'success or failure'(found phase: "Running", readiness: true) (20.019321675s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-2126e4cd-6e0a-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:
[AfterEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• Failure [29.128 seconds]
Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default command and arguments [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
"[/ep-2 override arguments]" in container output
Expected
<string>:
to contain substring
<string>: [/ep-2 override arguments]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
------------------------------
Job
should run a job to completion when tasks succeed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:61
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:16:16.084: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-0snru
Oct 8 15:16:16.088: INFO: Get service account default in ns e2e-tests-job-0snru failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:16:18.089: INFO: Service account default in ns e2e-tests-job-0snru with secrets found. (2.004594746s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:16:18.089: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-0snru
Oct 8 15:16:18.090: INFO: Service account default in ns e2e-tests-job-0snru with secrets found. (940.386µs)
[It] should run a job to completion when tasks succeed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:61
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-0snru".
Oct 8 15:16:18.095: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:16:18.095: INFO:
Oct 8 15:16:18.095: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:16:18.096: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:16:18.096: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-0snru" for this suite.
• Failure [7.020 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks succeed [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:61
Expected error:
<*errors.StatusError | 0xc208894d80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:56
------------------------------
EmptyDir volumes
should support (non-root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:16:23.105: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4wslb
Oct 8 15:16:23.106: INFO: Get service account default in ns e2e-tests-emptydir-4wslb failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:16:25.107: INFO: Service account default in ns e2e-tests-emptydir-4wslb with secrets found. (2.002368541s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:16:25.107: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4wslb
Oct 8 15:16:25.108: INFO: Service account default in ns e2e-tests-emptydir-4wslb with secrets found. (928.573µs)
[It] should support (non-root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 8 15:16:25.111: INFO: Waiting up to 5m0s for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:16:25.112: INFO: No Status.Info for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:16:25.112: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.421765ms elapsed)
Oct 8 15:16:27.114: INFO: No Status.Info for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:16:27.114: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002944969s elapsed)
Oct 8 15:16:29.115: INFO: No Status.Info for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' yet
Oct 8 15:16:29.115: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004563293s elapsed)
Oct 8 15:16:31.117: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4wslb' so far
Oct 8 15:16:31.117: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006326534s elapsed)
Oct 8 15:16:33.119: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4wslb' so far
Oct 8 15:16:33.119: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008184793s elapsed)
Oct 8 15:16:35.121: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-4wslb' so far
Oct 8 15:16:35.121: INFO: Waiting for pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-4wslb' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.010284797s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-36b2d10b-6e0a-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:
mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:16:37.182: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:16:37.183: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:16:37.183: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-4wslb" for this suite.
• [SLOW TEST:19.125 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
------------------------------
Services
should work after restarting kube-proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:305
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:16:42.229: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ys9nk
Oct 8 15:16:42.231: INFO: Get service account default in ns e2e-tests-services-ys9nk failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:16:44.232: INFO: Service account default in ns e2e-tests-services-ys9nk with secrets found. (2.002443581s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:16:44.232: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ys9nk
Oct 8 15:16:44.233: INFO: Service account default in ns e2e-tests-services-ys9nk with secrets found. (983.499µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should work after restarting kube-proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:305
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:16:44.236: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:16:44.238: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:16:44.238: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-ys9nk" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
S [SKIPPING] [7.017 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should work after restarting kube-proxy [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:305
Oct 8 15:16:44.233: Only supported for providers [gce gke] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
S
------------------------------
Services
should serve a basic endpoint from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:130
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:16:49.246: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-wkewy
Oct 8 15:16:49.247: INFO: Get service account default in ns e2e-tests-services-wkewy failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:16:51.248: INFO: Service account default in ns e2e-tests-services-wkewy with secrets found. (2.002142706s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:16:51.248: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-wkewy
Oct 8 15:16:51.249: INFO: Service account default in ns e2e-tests-services-wkewy with secrets found. (928.641µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should serve a basic endpoint from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:130
STEP: creating service endpoint-test2 in namespace e2e-tests-services-wkewy
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-wkewy to expose endpoints map[]
Oct 8 15:16:51.289: INFO: Get endpoints failed (890.535µs elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Oct 8 15:16:52.290: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wkewy exposes endpoints map[] (1.002151233s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-wkewy
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-wkewy to expose endpoints map[pod1:[80]]
Oct 8 15:16:56.306: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.013968977s elapsed, will retry)
Oct 8 15:17:01.325: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.032649632s elapsed, will retry)
Oct 8 15:17:03.333: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wkewy exposes endpoints map[pod1:[80]] (11.040842665s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-wkewy
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-wkewy to expose endpoints map[pod1:[80] pod2:[80]]
Oct 8 15:17:07.359: INFO: Unexpected endpoints: found map[46e689e8-6e0a-11e5-956c-28d244b00276:[80]], expected map[pod1:[80] pod2:[80]] (4.02291067s elapsed, will retry)
Oct 8 15:17:12.383: INFO: Unexpected endpoints: found map[46e689e8-6e0a-11e5-956c-28d244b00276:[80]], expected map[pod1:[80] pod2:[80]] (9.047420089s elapsed, will retry)
Oct 8 15:17:14.393: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wkewy exposes endpoints map[pod1:[80] pod2:[80]] (11.057548393s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-wkewy
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-wkewy to expose endpoints map[pod2:[80]]
Oct 8 15:17:15.411: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wkewy exposes endpoints map[pod2:[80]] (1.014335203s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-wkewy
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-wkewy to expose endpoints map[]
Oct 8 15:17:16.420: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wkewy exposes endpoints map[] (1.004466114s elapsed)
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:17:16.501: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:17:16.502: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:17:16.502: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-wkewy" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:32.300 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should serve a basic endpoint from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:130
------------------------------
Reboot
each node by switching off the network interface and ensure they function upon switch on
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:83
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.007 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by switching off the network interface and ensure they function upon switch on [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:83
Oct 8 15:17:21.545: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Kubectl describe
should check if kubectl describe prints relevant information for rc and pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:333
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:17:21.554: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7wjwq
Oct 8 15:17:21.555: INFO: Get service account default in ns e2e-tests-kubectl-7wjwq failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:17:23.556: INFO: Service account default in ns e2e-tests-kubectl-7wjwq with secrets found. (2.002621499s)
[It] should check if kubectl describe prints relevant information for rc and pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:333
Oct 8 15:17:23.556: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-7wjwq'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-7wjwq
• Failure [7.024 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl describe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
should check if kubectl describe prints relevant information for rc and pods [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:333
Oct 8 15:17:23.564: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-7wjwq] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
[] <nil> 0xc20898ab00 exit status 1 <nil> true [0xc20824c220 0xc20824c2c8 0xc20824c338] [0xc20824c220 0xc20824c2c8 0xc20824c338] [0xc20824c278 0xc20824c310] [0x6bd870 0x6bd870] 0xc208995560}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
Networking
should function for intra-pod communication
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:17:28.578: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-3xvfw
Oct 8 15:17:28.579: INFO: Get service account default in ns e2e-tests-nettest-3xvfw failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:17:30.580: INFO: Service account default in ns e2e-tests-nettest-3xvfw with secrets found. (2.002318988s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:17:30.580: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-3xvfw
Oct 8 15:17:30.581: INFO: Service account default in ns e2e-tests-nettest-3xvfw with secrets found. (938.33µs)
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should function for intra-pod communication
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
STEP: Creating a service named "nettest" in namespace "e2e-tests-nettest-3xvfw"
STEP: Creating a webserver (pending) pod on each node
Oct 8 15:17:30.812: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:17:30.812: INFO: Successfully found node 127.0.0.1 readiness to be true
Oct 8 15:17:30.812: INFO: Only one ready node is detected. The test has limited scope in such setting. Rerun it with at least two nodes to get complete coverage.
Oct 8 15:17:30.845: INFO: Created pod nettest-5mye3 on node 127.0.0.1
STEP: Waiting for the webserver pods to transition to Running state
Oct 8 15:17:30.845: INFO: Waiting up to 5m0s for pod nettest-5mye3 status to be running
Oct 8 15:17:30.848: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (2.808762ms elapsed)
Oct 8 15:17:32.849: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (2.004340491s elapsed)
Oct 8 15:17:34.851: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (4.005899087s elapsed)
Oct 8 15:17:36.853: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (6.008221197s elapsed)
Oct 8 15:17:38.855: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (8.010333975s elapsed)
Oct 8 15:17:40.857: INFO: Waiting for pod nettest-5mye3 in namespace 'e2e-tests-nettest-3xvfw' status to be 'running'(found phase: "Pending", readiness: false) (10.012327419s elapsed)
Oct 8 15:17:42.859: INFO: Found pod 'nettest-5mye3' on node '127.0.0.1'
STEP: Waiting for connectivity to be verified
Oct 8 15:17:44.859: INFO: About to make a proxy status call
Oct 8 15:17:44.861: INFO: Proxy status call returned in 1.608739ms
Oct 8 15:17:44.861: INFO: Attempt 0: test still running
Oct 8 15:17:46.861: INFO: About to make a proxy status call
Oct 8 15:17:46.863: INFO: Proxy status call returned in 1.960687ms
Oct 8 15:17:46.863: INFO: Passed on attempt 1. Cleaning up.
STEP: Cleaning up the webserver pods
STEP: Cleaning up the service
[AfterEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:17:47.061: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:17:47.063: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:17:47.063: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-3xvfw" for this suite.
• [SLOW TEST:23.540 seconds]
Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should function for intra-pod communication
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
------------------------------
Deployment
deployment should delete old pods and create new ones
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:37
[BeforeEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:17:52.119: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-bs98n
Oct 8 15:17:52.120: INFO: Get service account default in ns e2e-tests-deployment-bs98n failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:17:54.122: INFO: Service account default in ns e2e-tests-deployment-bs98n with secrets found. (2.003130156s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:17:54.122: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-bs98n
Oct 8 15:17:54.123: INFO: Service account default in ns e2e-tests-deployment-bs98n with secrets found. (935.995µs)
[It] deployment should delete old pods and create new ones
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:37
Oct 8 15:17:54.127: INFO: Pod name sample-pod: Found 0 pods out of 3
Oct 8 15:17:59.130: INFO: Pod name sample-pod: Found 3 pods out of 3
STEP: ensuring each pod is running
Oct 8 15:17:59.130: INFO: Waiting up to 5m0s for pod nginx-controller-pyqwj status to be running
Oct 8 15:17:59.132: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1.909528ms elapsed)
Oct 8 15:18:01.135: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2.004955854s elapsed)
Oct 8 15:18:03.137: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (4.006580446s elapsed)
Oct 8 15:18:05.139: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (6.009118137s elapsed)
Oct 8 15:18:07.141: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (8.010833749s elapsed)
Oct 8 15:18:09.143: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (10.012481441s elapsed)
Oct 8 15:18:11.144: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (12.014142873s elapsed)
Oct 8 15:18:13.146: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (14.015948057s elapsed)
Oct 8 15:18:15.148: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (16.017618907s elapsed)
Oct 8 15:18:17.150: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (18.019556027s elapsed)
Oct 8 15:18:19.151: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (20.021273725s elapsed)
Oct 8 15:18:21.153: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (22.023041298s elapsed)
Oct 8 15:18:23.155: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (24.024742304s elapsed)
Oct 8 15:18:25.157: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (26.026564883s elapsed)
Oct 8 15:18:27.159: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (28.028333884s elapsed)
Oct 8 15:18:29.161: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (30.03082799s elapsed)
Oct 8 15:18:31.163: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (32.032520679s elapsed)
Oct 8 15:18:33.164: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (34.034197276s elapsed)
Oct 8 15:18:35.166: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (36.035931085s elapsed)
Oct 8 15:18:37.168: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (38.037701843s elapsed)
Oct 8 15:18:39.170: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (40.039324373s elapsed)
Oct 8 15:18:41.171: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (42.040994628s elapsed)
Oct 8 15:18:43.177: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (44.04666663s elapsed)
Oct 8 15:18:45.179: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (46.048334473s elapsed)
Oct 8 15:18:47.180: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (48.050026611s elapsed)
Oct 8 15:18:49.182: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (50.052266663s elapsed)
Oct 8 15:18:51.184: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (52.053943291s elapsed)
Oct 8 15:18:53.186: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (54.055741646s elapsed)
Oct 8 15:18:55.188: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (56.057705772s elapsed)
Oct 8 15:18:57.190: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (58.059413167s elapsed)
Oct 8 15:18:59.192: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m0.06140151s elapsed)
Oct 8 15:19:01.194: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m2.06405187s elapsed)
Oct 8 15:19:03.196: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m4.065817656s elapsed)
Oct 8 15:19:05.198: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m6.067557671s elapsed)
Oct 8 15:19:07.200: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m8.070049459s elapsed)
Oct 8 15:19:09.202: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m10.071778931s elapsed)
Oct 8 15:19:11.204: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m12.073931904s elapsed)
Oct 8 15:19:13.206: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m14.075609154s elapsed)
Oct 8 15:19:15.208: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m16.077763767s elapsed)
Oct 8 15:19:17.210: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m18.07938892s elapsed)
Oct 8 15:19:19.211: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m20.081162265s elapsed)
Oct 8 15:19:21.219: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m22.08832777s elapsed)
Oct 8 15:19:23.222: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m24.091752622s elapsed)
Oct 8 15:19:25.224: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m26.093792943s elapsed)
Oct 8 15:19:27.225: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m28.095254359s elapsed)
Oct 8 15:19:29.227: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m30.097082633s elapsed)
Oct 8 15:19:31.232: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m32.101490362s elapsed)
Oct 8 15:19:33.234: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m34.103958968s elapsed)
Oct 8 15:19:35.236: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m36.10558555s elapsed)
Oct 8 15:19:37.243: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m38.11320527s elapsed)
Oct 8 15:19:39.251: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m40.120411657s elapsed)
Oct 8 15:19:41.253: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m42.122525736s elapsed)
Oct 8 15:19:43.255: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m44.125048816s elapsed)
Oct 8 15:19:45.257: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m46.127186175s elapsed)
Oct 8 15:19:47.259: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m48.129226428s elapsed)
Oct 8 15:19:49.261: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m50.13121541s elapsed)
Oct 8 15:19:51.265: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m52.135257626s elapsed)
Oct 8 15:19:53.269: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m54.138530142s elapsed)
Oct 8 15:19:55.273: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m56.142474758s elapsed)
Oct 8 15:19:57.277: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1m58.146533426s elapsed)
Oct 8 15:19:59.283: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m0.152870961s elapsed)
Oct 8 15:20:01.287: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m2.157097695s elapsed)
Oct 8 15:20:03.289: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m4.158852452s elapsed)
Oct 8 15:20:05.297: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m6.16694085s elapsed)
Oct 8 15:20:07.301: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m8.170337377s elapsed)
Oct 8 15:20:09.303: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m10.172415619s elapsed)
Oct 8 15:20:11.306: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m12.175695753s elapsed)
Oct 8 15:20:13.307: INFO: Waiting for pod nginx-controller-pyqwj in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2m14.177253127s elapsed)
Oct 8 15:20:15.309: INFO: Found pod 'nginx-controller-pyqwj' on node '127.0.0.1'
Oct 8 15:20:15.309: INFO: Waiting up to 5m0s for pod nginx-controller-uwass status to be running
Oct 8 15:20:15.311: INFO: Found pod 'nginx-controller-uwass' on node '127.0.0.1'
Oct 8 15:20:15.311: INFO: Waiting up to 5m0s for pod nginx-controller-zwck7 status to be running
Oct 8 15:20:15.312: INFO: Waiting for pod nginx-controller-zwck7 in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (1.370869ms elapsed)
Oct 8 15:20:17.314: INFO: Waiting for pod nginx-controller-zwck7 in namespace 'e2e-tests-deployment-bs98n' status to be 'running'(found phase: "Pending", readiness: false) (2.003076609s elapsed)
Oct 8 15:20:19.316: INFO: Found pod 'nginx-controller-zwck7' on node '127.0.0.1'
STEP: trying to dial each unique pod
Oct 8 15:20:19.320: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:19.323: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:19.325: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:21.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:21.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:21.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:23.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:23.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:23.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:25.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:25.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:25.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:27.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:27.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:27.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:29.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:29.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:29.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:31.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:31.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:31.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:33.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:33.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:33.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:35.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:35.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:35.335: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:37.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:37.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:37.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:39.332: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:39.335: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:39.339: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:41.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:41.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:41.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:43.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:43.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:43.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:45.337: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:45.340: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:45.344: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:47.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:47.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:47.332: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:49.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:49.332: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:49.334: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:51.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:51.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:51.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:53.330: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:53.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:53.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:55.331: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:55.333: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:55.335: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:57.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:20:57.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:20:57.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:20:59.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
... (the same three retry messages for replicas 1 [nginx-controller-pyqwj], 2 [nginx-controller-uwass], and 3 [nginx-controller-zwck7] repeat every ~2s from 15:20:59 through 15:22:17; elided) ...
Oct 8 15:22:19.329: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-pyqwj]: the server does not allow access to the requested resource (get pods nginx-controller-pyqwj):
Oct 8 15:22:19.331: INFO: Controller sample-pod: Failed to GET from replica 2 [nginx-controller-uwass]: the server does not allow access to the requested resource (get pods nginx-controller-uwass):
Oct 8 15:22:19.333: INFO: Controller sample-pod: Failed to GET from replica 3 [nginx-controller-zwck7]: the server does not allow access to the requested resource (get pods nginx-controller-zwck7):
Oct 8 15:22:19.333: INFO: error in waiting for pods to come up: failed to wait for pods responding: timed out waiting for the condition
Oct 8 15:22:19.336: INFO: deleting replication controller nginx-controller
[AfterEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-deployment-bs98n".
Oct 8 15:22:19.346: INFO: event for nginx-controller-pyqwj: {scheduler } Scheduled: Successfully assigned nginx-controller-pyqwj to 127.0.0.1
Oct 8 15:22:19.346: INFO: event for nginx-controller-pyqwj: {kubelet 127.0.0.1} Pulling: Pulling image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-pyqwj: {kubelet 127.0.0.1} Pulled: Successfully pulled image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-pyqwj: {kubelet 127.0.0.1} Created: Created with rkt id bfb30173
Oct 8 15:22:19.346: INFO: event for nginx-controller-pyqwj: {kubelet 127.0.0.1} Started: Started with rkt id bfb30173
Oct 8 15:22:19.346: INFO: event for nginx-controller-uwass: {scheduler } Scheduled: Successfully assigned nginx-controller-uwass to 127.0.0.1
Oct 8 15:22:19.346: INFO: event for nginx-controller-uwass: {kubelet 127.0.0.1} Pulling: Pulling image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-uwass: {kubelet 127.0.0.1} Pulled: Successfully pulled image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-uwass: {kubelet 127.0.0.1} Created: Created with rkt id f0b0356a
Oct 8 15:22:19.346: INFO: event for nginx-controller-uwass: {kubelet 127.0.0.1} Started: Started with rkt id f0b0356a
Oct 8 15:22:19.346: INFO: event for nginx-controller-zwck7: {scheduler } Scheduled: Successfully assigned nginx-controller-zwck7 to 127.0.0.1
Oct 8 15:22:19.346: INFO: event for nginx-controller-zwck7: {kubelet 127.0.0.1} Pulling: Pulling image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-zwck7: {kubelet 127.0.0.1} Pulled: Successfully pulled image "nginx"
Oct 8 15:22:19.346: INFO: event for nginx-controller-zwck7: {kubelet 127.0.0.1} Created: Created with rkt id e755476f
Oct 8 15:22:19.346: INFO: event for nginx-controller-zwck7: {kubelet 127.0.0.1} Started: Started with rkt id e755476f
Oct 8 15:22:19.346: INFO: event for nginx-controller: {replication-controller } SuccessfulCreate: Created pod: nginx-controller-uwass
Oct 8 15:22:19.346: INFO: event for nginx-controller: {replication-controller } SuccessfulCreate: Created pod: nginx-controller-zwck7
Oct 8 15:22:19.346: INFO: event for nginx-controller: {replication-controller } SuccessfulCreate: Created pod: nginx-controller-pyqwj
Oct 8 15:22:19.349: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:22:19.349: INFO: nginx-controller-pyqwj 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:20:13 -0700 PDT }]
Oct 8 15:22:19.349: INFO: nginx-controller-uwass 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:20:13 -0700 PDT }]
Oct 8 15:22:19.349: INFO: nginx-controller-zwck7 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:20:18 -0700 PDT }]
Oct 8 15:22:19.349: INFO:
Oct 8 15:22:19.349: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:22:19.350: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:22:19.350: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-deployment-bs98n" for this suite.
• Failure [272.271 seconds]
Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:41
deployment should delete old pods and create new ones [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:37
Expected error:
<*errors.errorString | 0xc208addb20>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:136
------------------------------
Nodes Resize
should be able to add nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:472
[BeforeEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:395
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:22:24.394: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-ej07z
Oct 8 15:22:24.395: INFO: Get service account default in ns e2e-tests-resize-nodes-ej07z failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:22:26.397: INFO: Service account default in ns e2e-tests-resize-nodes-ej07z with secrets found. (2.003354825s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:22:26.397: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-ej07z
Oct 8 15:22:26.399: INFO: Service account default in ns e2e-tests-resize-nodes-ej07z with secrets found. (1.540463ms)
[BeforeEach] Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:407
[AfterEach] Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:424
[AfterEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:397
Oct 8 15:22:26.403: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:22:26.405: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:22:26.405: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-resize-nodes-ej07z" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.026 seconds]
Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:539
Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:473
should be able to add nodes [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:472
Oct 8 15:22:26.399: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
DNS
should provide DNS for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:307
[BeforeEach] DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:22:31.416: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-g1tum
Oct 8 15:22:31.417: INFO: Get service account default in ns e2e-tests-dns-g1tum failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:22:33.419: INFO: Service account default in ns e2e-tests-dns-g1tum with secrets found. (2.002360676s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:22:33.419: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-g1tum
Oct 8 15:22:33.420: INFO: Service account default in ns e2e-tests-dns-g1tum with secrets found. (946.696µs)
[It] should provide DNS for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:307
STEP: Waiting for DNS Service to be Running
[AfterEach] DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-dns-g1tum".
Oct 8 15:22:33.425: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:22:33.425: INFO:
Oct 8 15:22:33.425: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:22:33.427: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:22:33.427: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-dns-g1tum" for this suite.
• Failure [7.023 seconds]
DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:309
should provide DNS for services [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:307
Oct 8 15:22:33.420: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:237
------------------------------
S
------------------------------
Docker Containers
should be able to override the image's default arguments (docker cmd)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
[BeforeEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:22:38.442: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-jxazz
Oct 8 15:22:38.444: INFO: Get service account default in ns e2e-tests-containers-jxazz failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:22:40.445: INFO: Service account default in ns e2e-tests-containers-jxazz with secrets found. (2.002365615s)
[It] should be able to override the image's default arguments (docker cmd)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
STEP: Creating a pod to test override arguments
Oct 8 15:22:40.448: INFO: Waiting up to 5m0s for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:22:40.449: INFO: No Status.Info for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:22:40.449: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.293414ms elapsed)
Oct 8 15:22:42.451: INFO: No Status.Info for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:22:42.451: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002914893s elapsed)
Oct 8 15:22:44.452: INFO: No Status.Info for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:22:44.452: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004359912s elapsed)
Oct 8 15:22:46.454: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-jxazz' so far
Oct 8 15:22:46.454: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006195026s elapsed)
Oct 8 15:22:48.457: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-jxazz' so far
Oct 8 15:22:48.457: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008775219s elapsed)
Oct 8 15:22:50.459: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-jxazz' so far
Oct 8 15:22:50.459: INFO: Waiting for pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-jxazz' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.01078391s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-166ab533-6e0b-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments override arguments]
[AfterEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• Failure [19.127 seconds]
Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default arguments (docker cmd) [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
"[/ep override arguments]" in container output
Expected
<string>: [/ep default arguments override arguments]
to contain substring
<string>: [/ep override arguments]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
------------------------------
S
------------------------------
Etcd failure
should recover from SIGKILL
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:79
[BeforeEach] Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:55
[AfterEach] Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:63
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:80
should recover from SIGKILL [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:79
Oct 8 15:22:57.564: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Update Demo
should do a rolling update of a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:123
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:22:57.572: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-1jpxi
Oct 8 15:22:57.572: INFO: Get service account default in ns e2e-tests-kubectl-1jpxi failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:22:59.574: INFO: Service account default in ns e2e-tests-kubectl-1jpxi with secrets found. (2.002232946s)
[BeforeEach] Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:92
[It] should do a rolling update of a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:123
STEP: creating the initial replication controller
Oct 8 15:22:59.574: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-1jpxi'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-1jpxi
• Failure [7.024 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:124
should do a rolling update of a replication controller [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:123
Oct 8 15:22:59.583: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-1jpxi] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
[] <nil> 0xc20828dc40 exit status 1 <nil> true [0xc20804ecc8 0xc20804ed00 0xc20804ed20] [0xc20804ecc8 0xc20804ed00 0xc20804ed20] [0xc20804ece8 0xc20804ed18] [0x6bd870 0x6bd870] 0xc20859e300}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
Reboot
each node by dropping all outbound packets for a while and ensure they function afterwards
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:99
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by dropping all outbound packets for a while and ensure they function afterwards [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:99
Oct 8 15:23:04.593: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Variable Expansion
should allow substituting values in a container's command
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
[BeforeEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:23:04.602: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-5g1xj
Oct 8 15:23:04.604: INFO: Get service account default in ns e2e-tests-var-expansion-5g1xj failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:23:06.607: INFO: Service account default in ns e2e-tests-var-expansion-5g1xj with secrets found. (2.004424121s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:23:06.607: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-5g1xj
Oct 8 15:23:06.608: INFO: Service account default in ns e2e-tests-var-expansion-5g1xj with secrets found. (1.631588ms)
[It] should allow substituting values in a container's command
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
STEP: Creating a pod to test substitution in container's command
Oct 8 15:23:06.614: INFO: Waiting up to 5m0s for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:23:06.616: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:06.616: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.718971ms elapsed)
Oct 8 15:23:08.618: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:08.618: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004699697s elapsed)
Oct 8 15:23:10.620: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:10.620: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006343763s elapsed)
Oct 8 15:23:12.622: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-5g1xj' so far
Oct 8 15:23:12.622: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.008408603s elapsed)
Oct 8 15:23:14.624: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-5g1xj' so far
Oct 8 15:23:14.624: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.010429516s elapsed)
Oct 8 15:23:16.626: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-5g1xj' so far
Oct 8 15:23:16.626: INFO: Waiting for pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-5g1xj' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.012330097s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 container dapi-container: <nil>
STEP: Successfully fetched pod logs:
[AfterEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-var-expansion-5g1xj".
Oct 8 15:23:18.701: INFO: event for var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276: {scheduler } Scheduled: Successfully assigned var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276 to 127.0.0.1
Oct 8 15:23:18.701: INFO: event for var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/busybox" already present on machine
Oct 8 15:23:18.701: INFO: event for var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Created: Created with rkt id 968476a4
Oct 8 15:23:18.701: INFO: event for var-expansion-2602ea61-6e0b-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Started: Started with rkt id 968476a4
Oct 8 15:23:18.703: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:23:18.703: INFO:
Oct 8 15:23:18.703: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:23:18.704: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:23:18.704: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-5g1xj" for this suite.
• Failure [19.137 seconds]
Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's command [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
"test-value" in container output
Expected
<string>:
to contain substring
<string>: test-value
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
------------------------------
S
------------------------------
PrivilegedPod
should test privileged pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/privileged.go:73
[BeforeEach] PrivilegedPod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:23:23.739: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-9si9a
Oct 8 15:23:23.740: INFO: Get service account default in ns e2e-tests-e2e-privilegedpod-9si9a failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:23:25.741: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-9si9a with secrets found. (2.002128163s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:23:25.741: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-privilegedpod-9si9a
Oct 8 15:23:25.742: INFO: Service account default in ns e2e-tests-e2e-privilegedpod-9si9a with secrets found. (904.302µs)
[It] should test privileged pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/privileged.go:73
[AfterEach] PrivilegedPod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:23:25.747: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:23:25.749: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:23:25.749: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-e2e-privilegedpod-9si9a" for this suite.
S [SKIPPING] [7.018 seconds]
PrivilegedPod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/privileged.go:74
should test privileged pod [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/privileged.go:73
Oct 8 15:23:25.742: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
S
------------------------------
Kubectl client Update Demo
should create and stop a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:99
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:23:30.757: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-lyo69
Oct 8 15:23:30.758: INFO: Get service account default in ns e2e-tests-kubectl-lyo69 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:23:32.761: INFO: Service account default in ns e2e-tests-kubectl-lyo69 with secrets found. (2.003160143s)
[BeforeEach] Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:92
[It] should create and stop a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:99
STEP: creating a replication controller
Oct 8 15:23:32.761: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-lyo69'
STEP: using delete to clean up resources
Oct 8 15:23:32.773: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-lyo69'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-lyo69
• Failure [7.044 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:124
should create and stop a replication controller [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:99
Oct 8 15:23:32.771: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-lyo69] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
[] <nil> 0xc208944500 exit status 1 <nil> true [0xc20804ede0 0xc20804ee00 0xc20804ee20] [0xc20804ede0 0xc20804ee00 0xc20804ee20] [0xc20804edf8 0xc20804ee18] [0x6bd870 0x6bd870] 0xc20859f980}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
EmptyDir volumes
should support (root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:23:37.802: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-tmjtn
Oct 8 15:23:37.804: INFO: Get service account default in ns e2e-tests-emptydir-tmjtn failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:23:39.805: INFO: Service account default in ns e2e-tests-emptydir-tmjtn with secrets found. (2.002849864s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:23:39.805: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-tmjtn
Oct 8 15:23:39.806: INFO: Service account default in ns e2e-tests-emptydir-tmjtn with secrets found. (948.722µs)
[It] should support (root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 8 15:23:39.808: INFO: Waiting up to 5m0s for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:23:39.811: INFO: No Status.Info for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:39.811: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.829158ms elapsed)
Oct 8 15:23:41.813: INFO: No Status.Info for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:41.813: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004435519s elapsed)
Oct 8 15:23:43.815: INFO: No Status.Info for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:23:43.815: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006126241s elapsed)
Oct 8 15:23:45.816: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-tmjtn' so far
Oct 8 15:23:45.816: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.008011479s elapsed)
Oct 8 15:23:47.818: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-tmjtn' so far
Oct 8 15:23:47.818: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009873575s elapsed)
Oct 8 15:23:49.821: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-tmjtn' so far
Oct 8 15:23:49.821: INFO: Waiting for pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-tmjtn' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.012970687s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-39cc7ff8-6e0b-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:23:51.930: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:23:51.932: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:23:51.932: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-tmjtn" for this suite.
• [SLOW TEST:19.171 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
------------------------------
S
------------------------------
Pods
should be submitted and removed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:295
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:23:56.973: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2mvr1
Oct 8 15:23:56.975: INFO: Get service account default in ns e2e-tests-pods-2mvr1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:23:58.976: INFO: Service account default in ns e2e-tests-pods-2mvr1 with secrets found. (2.003060358s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:23:58.976: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2mvr1
Oct 8 15:23:58.977: INFO: Service account default in ns e2e-tests-pods-2mvr1 with secrets found. (940.887µs)
[It] should be submitted and removed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:295
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:23:59.110: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:23:59.111: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:23:59.111: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-2mvr1" for this suite.
• [SLOW TEST:7.404 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should be submitted and removed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:295
------------------------------
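
[Editor's note] The STEP sequence above (set up watch, submit the pod, verify creation was observed, delete, verify deletion was observed) is the watch-before-write pattern: the watch is opened before the create so the ADDED and DELETED events cannot be missed. Below is a compact sketch of that pattern against the current client-go API; the 2015 suite used the older pkg/client/unversioned client, so treat the code as an illustration, not the suite's implementation.

    // Illustrative sketch only; names such as "watch-demo" are made up.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/yifan/.kubernetes_auth")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, ns := context.Background(), "default"

        // Open the watch first so the creation and deletion events cannot race
        // with the create/delete calls below.
        w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "watch-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "c", Image: "gcr.io/google_containers/pause"}},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        if err := cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }

        // Drain events until the deletion has been observed.
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type)
            if ev.Type == watch.Deleted {
                break
            }
        }
    }
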
S
------------------------------
Services
should be able to up and down services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:255
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:24:04.421: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-he1el
Oct 8 15:24:04.423: INFO: Get service account default in ns e2e-tests-services-he1el failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:24:06.424: INFO: Service account default in ns e2e-tests-services-he1el with secrets found. (2.002831801s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:24:06.424: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-he1el
Oct 8 15:24:06.425: INFO: Service account default in ns e2e-tests-services-he1el with secrets found. (920.575µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should be able to up and down services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:255
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:24:06.430: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:24:06.432: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:24:06.432: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-he1el" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
S [SKIPPING] [7.064 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should be able to up and down services [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:255
Oct 8 15:24:06.425: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
EmptyDir volumes
should support (non-root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:24:11.441: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-z9w6g
Oct 8 15:24:11.443: INFO: Get service account default in ns e2e-tests-emptydir-z9w6g failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:24:13.446: INFO: Service account default in ns e2e-tests-emptydir-z9w6g with secrets found. (2.004932157s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:24:13.446: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-z9w6g
Oct 8 15:24:13.447: INFO: Service account default in ns e2e-tests-emptydir-z9w6g with secrets found. (1.06053ms)
[It] should support (non-root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 8 15:24:13.450: INFO: Waiting up to 5m0s for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:24:13.452: INFO: No Status.Info for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:24:13.452: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.374064ms elapsed)
Oct 8 15:24:15.453: INFO: No Status.Info for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:24:15.453: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00299138s elapsed)
Oct 8 15:24:17.455: INFO: No Status.Info for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:24:17.455: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004831479s elapsed)
Oct 8 15:24:19.457: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-z9w6g' so far
Oct 8 15:24:19.457: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006612269s elapsed)
Oct 8 15:24:21.459: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-z9w6g' so far
Oct 8 15:24:21.459: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008370854s elapsed)
Oct 8 15:24:23.460: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-z9w6g' so far
Oct 8 15:24:23.460: INFO: Waiting for pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-z9w6g' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.010218863s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-4dd9a8a8-6e0b-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:24:25.534: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:24:25.539: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:24:25.539: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-z9w6g" for this suite.
• [SLOW TEST:19.146 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
------------------------------
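
[Editor's note] In the EmptyDir runs above, the "mount type" line reads "tmpfs" for the tmpfs medium but the bare number 61267 for the default medium; 61267 is 0xEF53, the ext2/3/4 superblock magic, so the default medium simply ended up on the node's ext-backed disk. A minimal, Linux-only sketch of such a probe via statfs(2) follows; it is an illustration, not the actual mount-tester source.

    // Illustrative sketch: print the filesystem type of the path in os.Args[1].
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs(os.Args[1], &st); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        const tmpfsMagic = 0x01021994 // TMPFS_MAGIC
        if st.Type == tmpfsMagic {
            fmt.Println("tmpfs")
            return
        }
        // Anything else is printed numerically, e.g. 61267 (0xEF53) for ext2/3/4.
        fmt.Println(st.Type)
    }
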
Services
should serve multiport endpoints from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:213
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:24:30.587: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-9y747
Oct 8 15:24:30.589: INFO: Get service account default in ns e2e-tests-services-9y747 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:24:32.590: INFO: Service account default in ns e2e-tests-services-9y747 with secrets found. (2.003019306s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:24:32.590: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-9y747
Oct 8 15:24:32.591: INFO: Service account default in ns e2e-tests-services-9y747 with secrets found. (879.458µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should serve multiport endpoints from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:213
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-9y747
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-9y747 to expose endpoints map[]
Oct 8 15:24:32.637: INFO: Get endpoints failed (2.45878ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Oct 8 15:24:33.639: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9y747 exposes endpoints map[] (1.00402738s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-9y747
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-9y747 to expose endpoints map[pod1:[100]]
Oct 8 15:24:37.657: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.013402682s elapsed, will retry)
Oct 8 15:24:42.673: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.029192304s elapsed, will retry)
Oct 8 15:24:44.678: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9y747 exposes endpoints map[pod1:[100]] (11.034661995s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-9y747
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-9y747 to expose endpoints map[pod1:[100] pod2:[101]]
Oct 8 15:24:48.703: INFO: Unexpected endpoints: found map[59e2cad6-6e0b-11e5-956c-28d244b00276:[100]], expected map[pod1:[100] pod2:[101]] (4.022140862s elapsed, will retry)
Oct 8 15:24:53.726: INFO: Unexpected endpoints: found map[59e2cad6-6e0b-11e5-956c-28d244b00276:[100]], expected map[pod1:[100] pod2:[101]] (9.045739406s elapsed, will retry)
Oct 8 15:24:55.740: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9y747 exposes endpoints map[pod1:[100] pod2:[101]] (11.059099377s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-9y747
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-9y747 to expose endpoints map[pod2:[101]]
Oct 8 15:24:56.752: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9y747 exposes endpoints map[pod2:[101]] (1.009011265s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-9y747
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-9y747 to expose endpoints map[]
Oct 8 15:24:57.760: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9y747 exposes endpoints map[] (1.003970484s elapsed)
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:24:57.871: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:24:57.873: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:24:57.873: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-9y747" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:32.341 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should serve multiport endpoints from pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:213
------------------------------
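
[Editor's note] The endpoint maps in the run above (pod1:[100], then pod1:[100] pod2:[101], then pod2:[101]) come from a service that exposes two ports with different target ports, each backed by one of the two pods. A rough Go sketch of such a multi-port Service object follows; the port numbers match the log, everything else is illustrative.

    // Illustrative sketch of a two-port service like multi-endpoint-test.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // multiEndpointService routes service port 80 to container port 100 and
    // service port 81 to container port 101, which is why the endpoints map in
    // the log pairs pod1 with 100 and pod2 with 101.
    func multiEndpointService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "multi-endpoint-test"},
                Ports: []corev1.ServicePort{
                    {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                    {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
                },
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", multiEndpointService())
    }
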
KubeProxy
should test kube-proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:100
[BeforeEach] KubeProxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:25:02.928: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubeproxy-1inf1
Oct 8 15:25:02.929: INFO: Get service account default in ns e2e-tests-e2e-kubeproxy-1inf1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:25:04.930: INFO: Service account default in ns e2e-tests-e2e-kubeproxy-1inf1 with secrets found. (2.002071676s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:25:04.930: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-e2e-kubeproxy-1inf1
Oct 8 15:25:04.931: INFO: Service account default in ns e2e-tests-e2e-kubeproxy-1inf1 with secrets found. (966.655µs)
[It] should test kube-proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:100
[AfterEach] KubeProxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:25:04.934: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:25:04.936: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:25:04.936: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-e2e-kubeproxy-1inf1" for this suite.
S [SKIPPING] [7.017 seconds]
KubeProxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:101
should test kube-proxy [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:100
Oct 8 15:25:04.931: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Kubectl label
should update the label on a resource
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:439
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:25:09.944: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-h5gun
Oct 8 15:25:09.946: INFO: Get service account default in ns e2e-tests-kubectl-h5gun failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:25:11.947: INFO: Service account default in ns e2e-tests-kubectl-h5gun with secrets found. (2.002673722s)
[BeforeEach] Kubectl label
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:415
STEP: creating the pod
Oct 8 15:25:11.947: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-h5gun'
[AfterEach] Kubectl label
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:418
STEP: using delete to clean up resources
Oct 8 15:25:11.965: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-h5gun'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-h5gun
• Failure in Spec Setup (BeforeEach) [7.044 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl label
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:440
should update the label on a resource [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:439
Oct 8 15:25:11.960: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-h5gun] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
[] <nil> 0xc2088c1a40 exit status 1 <nil> true [0xc20824cd30 0xc20824cd58 0xc20824cda0] [0xc20824cd30 0xc20824cd58 0xc20824cda0] [0xc20824cd50 0xc20824cd78] [0x6bd870 0x6bd870] 0xc2088bb920}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
Cadvisor
should be healthy on every node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:47
[BeforeEach] Cadvisor
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:43
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should be healthy on every node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:47
STEP: getting list of nodes
STEP: Querying stats from node 127.0.0.1 using url api/v1/proxy/nodes/127.0.0.1/stats/
------------------------------
Downward API
should provide pod name and namespace as env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
[BeforeEach] Downward API
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:25:17.030: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-d5qfh
Oct 8 15:25:17.033: INFO: Get service account default in ns e2e-tests-downward-api-d5qfh failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:25:19.034: INFO: Service account default in ns e2e-tests-downward-api-d5qfh with secrets found. (2.004054369s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:25:19.034: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-d5qfh
Oct 8 15:25:19.035: INFO: Service account default in ns e2e-tests-downward-api-d5qfh with secrets found. (931.409µs)
[It] should provide pod name and namespace as env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
STEP: Creating a pod to test downward api env vars
Oct 8 15:25:19.038: INFO: Waiting up to 5m0s for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 15:25:19.040: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:25:19.040: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.236311ms elapsed)
Oct 8 15:25:21.044: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:25:21.044: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005882304s elapsed)
Oct 8 15:25:23.046: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' yet
Oct 8 15:25:23.046: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.00769546s elapsed)
Oct 8 15:25:25.048: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-d5qfh' so far
Oct 8 15:25:25.048: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.009618669s elapsed)
Oct 8 15:25:27.050: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-d5qfh' so far
Oct 8 15:25:27.050: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.011487342s elapsed)
Oct 8 15:25:29.052: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-d5qfh' so far
Oct 8 15:25:29.052: INFO: Waiting for pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-d5qfh' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.013755297s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
USER=root
AC_APP_NAME=dapi-container
SHLVL=1
HOME=/root
LOGNAME=root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
POD_NAME=downward-api-74f1abbb-6e0b-11e5-bcd2-28d244b00276
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
SHELL=/bin/sh
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
POD_NAMESPACE=e2e-tests-downward-api-d5qfh
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
AC_METADATA_URL=
[AfterEach] Downward API
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:25:31.119: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:25:31.120: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:25:31.121: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-d5qfh" for this suite.
• [SLOW TEST:19.144 seconds]
Downward API
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:75
should provide pod name and namespace as env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
------------------------------
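
[Editor's note] The POD_NAME and POD_NAMESPACE values in the fetched logs above are injected through the downward API: environment variables whose values come from fieldRef selectors on the pod's own metadata. A minimal sketch of such a pod built with the current Go client types follows (the 2015 tree used the equivalent pkg/api/v1 types); the image and command are assumptions chosen to mirror the env dump above.

    // Illustrative sketch of a downward-API pod like dapi-container.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "gcr.io/google_containers/busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "POD_NAME",
                            ValueFrom: &corev1.EnvVarSource{
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            },
                        },
                        {
                            Name: "POD_NAMESPACE",
                            ValueFrom: &corev1.EnvVarSource{
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", downwardAPIPod())
    }
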
S
------------------------------
Daemon set
should run and stop simple daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:126
[BeforeEach] Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:65
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:25:36.170: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-k17y7
Oct 8 15:25:36.170: INFO: Get service account default in ns e2e-tests-daemonsets-k17y7 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:25:38.172: INFO: Service account default in ns e2e-tests-daemonsets-k17y7 with secrets found. (2.002123941s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:25:38.172: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-k17y7
Oct 8 15:25:38.173: INFO: Service account default in ns e2e-tests-daemonsets-k17y7 with secrets found. (942.491µs)
[It] should run and stop simple daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:126
Oct 8 15:25:40.177: INFO: Creating simple daemon set daemon-set
[AfterEach] Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:71
STEP: Collecting events from namespace "e2e-tests-daemonsets-k17y7".
Oct 8 15:25:42.187: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:25:42.187: INFO:
Oct 8 15:25:42.187: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:25:42.188: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:25:42.188: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-daemonsets-k17y7" for this suite.
• Failure [11.027 seconds]
Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:182
should run and stop simple daemon [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:126
Expected error:
<*errors.StatusError | 0xc20897a980>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:98
------------------------------
Networking
should provide Internet connection for containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:25:47.196: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-3q3gv
Oct 8 15:25:47.197: INFO: Get service account default in ns e2e-tests-nettest-3q3gv failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:25:49.199: INFO: Service account default in ns e2e-tests-nettest-3q3gv with secrets found. (2.002371081s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:25:49.199: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-3q3gv
Oct 8 15:25:49.200: INFO: Service account default in ns e2e-tests-nettest-3q3gv with secrets found. (937.773µs)
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
STEP: Running container which tries to wget google.com
STEP: Verify that the pod succeed
Oct 8 15:25:49.397: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Oct 8 15:25:49.399: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Oct 8 15:25:49.399: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.261423ms elapsed)
Oct 8 15:25:51.400: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Oct 8 15:25:51.400: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002665404s elapsed)
Oct 8 15:25:53.402: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Oct 8 15:25:53.402: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.0042935s elapsed)
Oct 8 15:25:55.403: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-3q3gv' so far
Oct 8 15:25:55.403: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006061354s elapsed)
Oct 8 15:25:57.405: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-3q3gv' so far
Oct 8 15:25:57.405: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.007806716s elapsed)
Oct 8 15:25:59.407: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-3q3gv' so far
Oct 8 15:25:59.407: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-3q3gv' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.009629311s elapsed)
[AfterEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-nettest-3q3gv".
Oct 8 15:26:01.422: INFO: event for wget-test: {scheduler } Scheduled: Successfully assigned wget-test to 127.0.0.1
Oct 8 15:26:01.422: INFO: event for wget-test: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/busybox" already present on machine
Oct 8 15:26:01.422: INFO: event for wget-test: {kubelet 127.0.0.1} Created: Created with rkt id 8b0f0b6a
Oct 8 15:26:01.422: INFO: event for wget-test: {kubelet 127.0.0.1} Started: Started with rkt id 8b0f0b6a
Oct 8 15:26:01.424: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:26:01.424: INFO: wget-test 127.0.0.1 Failed 30s [{Ready False 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:25:54 -0700 PDT ContainersNotReady containers with unready status: [wget-test-container]}]
Oct 8 15:26:01.424: INFO:
Oct 8 15:26:01.424: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:26:01.425: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:26:01.425: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-3q3gv" for this suite.
• Failure [19.273 seconds]
Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide Internet connection for containers [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
Expected error:
<*errors.errorString | 0xc20821a620>: {
s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason: Message: StartedAt:2015-10-08 15:25:53 -0700 PDT FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:}",
}
pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason: Message: StartedAt:2015-10-08 15:25:53 -0700 PDT FinishedAt:0001-01-01 00:00:00 +0000 UTC ContainerID:}
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:81
------------------------------
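
[Editor's note] The failure above is the wget container exiting 1 (the busybox pod could not fetch google.com from inside the rkt pod in this run), and the framework's "status to be 'success or failure'" wait is a poll of the pod phase until it reaches Succeeded or Failed, which presumes the pod does not restart so the container's exit code decides the terminal phase. A rough sketch of that polling loop with current client-go follows; it is illustrative only, not the framework's code.

    // Illustrative sketch of the "success or failure" wait seen in the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSuccessOrFailure polls the pod phase every two seconds until the
    // pod terminates or the timeout expires.
    func waitForSuccessOrFailure(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return "", err
            }
            if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
                return pod.Status.Phase, nil
            }
            time.Sleep(2 * time.Second)
        }
        return "", fmt.Errorf("pod %s/%s did not terminate within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/yifan/.kubernetes_auth")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        phase, err := waitForSuccessOrFailure(cs, "default", "wget-test", 5*time.Minute)
        fmt.Println(phase, err)
    }
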
Pod Disks
should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:233
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:26:06.470: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-j8pz3
Oct 8 15:26:06.470: INFO: Get service account default in ns e2e-tests-pod-disks-j8pz3 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:26:08.472: INFO: Service account default in ns e2e-tests-pod-disks-j8pz3 with secrets found. (2.002268148s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:26:08.472: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-j8pz3
Oct 8 15:26:08.473: INFO: Service account default in ns e2e-tests-pod-disks-j8pz3 with secrets found. (984.914µs)
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:66
[AfterEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:26:08.476: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:26:08.477: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:26:08.477: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pod-disks-j8pz3" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.016 seconds]
Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:297
should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:233
Oct 8 15:26:08.473: Requires at least 2 nodes (not -1)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Kubectl run pod
should create a pod from an image when restart is OnFailure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:612
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:26:13.486: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-50axa
Oct 8 15:26:13.487: INFO: Get service account default in ns e2e-tests-kubectl-50axa failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:26:15.488: INFO: Service account default in ns e2e-tests-kubectl-50axa with secrets found. (2.00238876s)
[BeforeEach] Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:589
[It] should create a pod from an image when restart is OnFailure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:612
STEP: running the image nginx
Oct 8 15:26:15.488: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth run e2e-test-nginx-pod --restart=OnFailure --image=nginx --namespace=e2e-tests-kubectl-50axa'
[AfterEach] Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:593
Oct 8 15:26:15.503: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-50axa'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-50axa
• Failure [7.045 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:633
should create a pod from an image when restart is OnFailure [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:612
Oct 8 15:26:15.500: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth run e2e-test-nginx-pod --restart=OnFailure --image=nginx --namespace=e2e-tests-kubectl-50axa] [] <nil> Error: unknown flag: --restart
Run 'kubectl help' for usage.
[] <nil> 0xc208945c00 exit status 1 <nil> true [0xc20824cc48 0xc20824cca8 0xc20824ccf8] [0xc20824cc48 0xc20824cca8 0xc20824ccf8] [0xc20824cca0 0xc20824cce8] [0x6bd870 0x6bd870] 0xc208b91f20}:
Command stdout:
stderr:
Error: unknown flag: --restart
Run 'kubectl help' for usage.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
S
------------------------------
Pods
should be schedule with cpu and memory limits
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:187
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:26:20.531: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2syto
Oct 8 15:26:20.534: INFO: Get service account default in ns e2e-tests-pods-2syto failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:26:22.535: INFO: Service account default in ns e2e-tests-pods-2syto with secrets found. (2.003532891s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:26:22.535: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-2syto
Oct 8 15:26:22.536: INFO: Service account default in ns e2e-tests-pods-2syto with secrets found. (965.993µs)
[It] should be schedule with cpu and memory limits
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:187
STEP: creating the pod
Oct 8 15:26:22.539: INFO: Waiting up to 5m0s for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 status to be running
Oct 8 15:26:22.542: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (3.792559ms elapsed)
Oct 8 15:26:24.544: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (2.005788542s elapsed)
Oct 8 15:26:26.546: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (4.007796655s elapsed)
Oct 8 15:26:28.549: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (6.010394645s elapsed)
Oct 8 15:26:30.551: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (8.012500057s elapsed)
Oct 8 15:26:32.553: INFO: Waiting for pod pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-2syto' status to be 'running'(found phase: "Pending", readiness: false) (10.014534827s elapsed)
Oct 8 15:26:34.555: INFO: Found pod 'pod-update-9acb19f4-6e0b-11e5-bcd2-28d244b00276' on node '127.0.0.1'
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:26:34.562: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:26:34.565: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:26:34.565: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-2syto" for this suite.
• [SLOW TEST:19.098 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should be schedule with cpu and memory limits
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:187
------------------------------
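
[Editor's note] The spec above only checks that a pod declaring CPU and memory limits is still scheduled and reaches Running on the single local node. A minimal sketch of such a pod using resource.MustParse for the quantities follows; the concrete limit values are not shown in the log, so the ones below are made up.

    // Illustrative sketch of a pod with explicit CPU and memory limits.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // limitedPod declares limits so the scheduler must fit the pod against the
    // node's allocatable resources before it can reach Running.
    func limitedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-update-limits-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "nginx",
                    Image: "nginx",
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("100m"),
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", limitedPod())
    }
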
Kubelet regular resource usage tracking
over 30m0s with 35 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:26:39.629: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-2hq37
Oct 8 15:26:39.631: INFO: Get service account default in ns e2e-tests-kubelet-perf-2hq37 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:26:41.635: INFO: Service account default in ns e2e-tests-kubelet-perf-2hq37 with secrets found. (2.005288205s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:26:41.635: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-2hq37
Oct 8 15:26:41.637: INFO: Service account default in ns e2e-tests-kubelet-perf-2hq37 with secrets found. (2.341282ms)
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:115
[It] over 30m0s with 35 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
STEP: Creating a RC of 35 pods and wait until all pods of this RC are running
STEP: creating replication controller resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-perf-2hq37
Oct 8 15:26:41.652: INFO: Created replication controller with name: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276, namespace: e2e-tests-kubelet-perf-2hq37, replica count: 35
Oct 8 15:26:51.652: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 0 running, 35 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:01.658: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 0 running, 35 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:11.658: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 0 running, 35 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:21.658: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 1 running, 34 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:31.658: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 5 running, 30 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:41.659: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 6 running, 29 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:27:51.660: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 15 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:28:01.663: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 16 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:28:11.663: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 24 running, 11 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:28:21.663: INFO: resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 Pods: 35 out of 35 created, 35 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 15:28:21.663: INFO:
Resource usage on node "127.0.0.1":
container cpu(cores) memory(MB)
"/" 1.035 6423.38
STEP: Start monitoring resource usage
Oct 8 15:28:21.663: INFO: Still running...29m59.99999984s left
Oct 8 15:33:21.663: INFO: Still running...24m59.9999062s left
Oct 8 15:38:21.685: INFO: 35 pods are running on node 127.0.0.1
Oct 8 15:38:21.685: INFO: Still running...19m59.9777987s left
Oct 8 15:43:21.685: INFO: Still running...14m59.97768539s left
Oct 8 15:48:21.716: INFO: 35 pods are running on node 127.0.0.1
Oct 8 15:48:21.716: INFO: Still running...9m59.946902565s left
Oct 8 15:53:21.716: INFO: Still running...4m59.946795662s left
Oct 8 15:58:21.687: INFO: 35 pods are running on node 127.0.0.1
STEP: Reporting overall resource usage
Oct 8 15:58:21.693: INFO: 35 pods are running on node 127.0.0.1
Oct 8 15:58:21.694: INFO:
CPU usage of containers on node "127.0.0.1":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 1.050 1.137 1.352 1.603 2.603 2.969 3.371
Oct 8 15:58:21.694: INFO:
Resource usage on node "127.0.0.1":
container cpu(cores) memory(MB)
"/" 3.794 5983.34
STEP: Deleting the RC
STEP: deleting replication controller resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-perf-2hq37
Oct 8 15:58:23.724: INFO: Deleting RC resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 took: 2.027437748s
Oct 8 15:58:59.782: INFO: Terminating RC resource35-a62e6854-6e0b-11e5-bcd2-28d244b00276 pods took: 36.057811609s
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 15:58:59.782: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:58:59.783: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:58:59.783: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kubelet-perf-2hq37" for this suite.
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:119
• [SLOW TEST:1955.189 seconds]
Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:143
regular resource usage tracking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:131
over 30m0s with 35 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
------------------------------
S
------------------------------
Kubectl client Kubectl logs
should be able to retrieve and filter logs
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:495
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 15:59:14.818: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-eiirt
Oct 8 15:59:14.819: INFO: Get service account default in ns e2e-tests-kubectl-eiirt failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:59:16.820: INFO: Service account default in ns e2e-tests-kubectl-eiirt with secrets found. (2.002149071s)
[BeforeEach] Kubectl logs
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:454
STEP: creating an rc
Oct 8 15:59:16.820: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-eiirt'
[AfterEach] Kubectl logs
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:457
STEP: using delete to clean up resources
Oct 8 15:59:16.839: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-eiirt'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-eiirt
• Failure in Spec Setup (BeforeEach) [7.081 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl logs
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:496
should be able to retrieve and filter logs [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:495
Oct 8 15:59:16.833: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-eiirt] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
[] <nil> 0xc208b853e0 exit status 1 <nil> true [0xc20824c588 0xc20824c5b0 0xc20824c5d8] [0xc20824c588 0xc20824c5b0 0xc20824c5d8] [0xc20824c5a0 0xc20824c5c8] [0x6bd870 0x6bd870] 0xc208402240}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
DNS
should provide DNS for the cluster
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:223
[BeforeEach] DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:59:21.899: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-42rad
Oct 8 15:59:21.901: INFO: Get service account default in ns e2e-tests-dns-42rad failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:59:23.903: INFO: Service account default in ns e2e-tests-dns-42rad with secrets found. (2.004105436s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:59:23.903: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dns-42rad
Oct 8 15:59:23.905: INFO: Service account default in ns e2e-tests-dns-42rad with secrets found. (1.372086ms)
[It] should provide DNS for the cluster
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:223
STEP: Waiting for DNS Service to be Running
[AfterEach] DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-dns-42rad".
Oct 8 15:59:23.914: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:59:23.914: INFO:
Oct 8 15:59:23.914: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:59:23.916: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:59:23.916: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-dns-42rad" for this suite.
• Failure [7.031 seconds]
DNS
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:309
should provide DNS for the cluster [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:223
Oct 8 15:59:23.906: Unexpected number of pods (0) matches the label selector k8s-app=kube-dns,kubernetes.io/cluster-service=true
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:199
------------------------------
Restart
should restart all nodes and ensure all nodes and pods recover
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
[BeforeEach] Restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/restart.go:68
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[AfterEach] Restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/restart.go:76
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/restart.go:126
should restart all nodes and ensure all nodes and pods recover [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Oct 8 15:59:28.928: Only supported for providers [gce gke] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
S
------------------------------
Port forwarding With a server that expects a client request
should support a client that connects, sends data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:185
[BeforeEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:59:28.938: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-vvcae
Oct 8 15:59:28.940: INFO: Get service account default in ns e2e-tests-port-forwarding-vvcae failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:59:30.945: INFO: Service account default in ns e2e-tests-port-forwarding-vvcae with secrets found. (2.006686998s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:59:30.945: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-vvcae
Oct 8 15:59:30.947: INFO: Service account default in ns e2e-tests-port-forwarding-vvcae with secrets found. (1.443672ms)
[It] should support a client that connects, sends data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:185
STEP: creating the target pod
Oct 8 15:59:30.954: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 8 15:59:30.956: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (2.374775ms elapsed)
Oct 8 15:59:32.958: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (2.003972989s elapsed)
Oct 8 15:59:34.959: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (4.005643072s elapsed)
Oct 8 15:59:36.961: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (6.007040763s elapsed)
Oct 8 15:59:38.964: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (8.009968253s elapsed)
Oct 8 15:59:40.965: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (10.011612707s elapsed)
Oct 8 15:59:42.967: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (12.01335454s elapsed)
Oct 8 15:59:44.969: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (14.015047451s elapsed)
Oct 8 15:59:46.970: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-vvcae' status to be 'running'(found phase: "Pending", readiness: false) (16.016665254s elapsed)
Oct 8 15:59:48.972: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Oct 8 15:59:48.972: INFO: starting port-forward command and streaming output
Oct 8 15:59:48.972: INFO: Asynchronously running '/home/yifan/google-cloud-sdk/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth port-forward --namespace=e2e-tests-port-forwarding-vvcae pfpod :80'
Oct 8 15:59:48.973: INFO: reading from `kubectl port-forward` command's stderr
[AfterEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-vvcae".
Oct 8 15:59:48.992: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to 127.0.0.1
Oct 8 15:59:48.992: INFO: event for pfpod: {kubelet 127.0.0.1} Pulling: Pulling image "gcr.io/google_containers/portforwardtester:1.0"
Oct 8 15:59:48.992: INFO: event for pfpod: {kubelet 127.0.0.1} Pulled: Successfully pulled image "gcr.io/google_containers/portforwardtester:1.0"
Oct 8 15:59:48.992: INFO: event for pfpod: {kubelet 127.0.0.1} Created: Created with rkt id 5c4864d0
Oct 8 15:59:48.992: INFO: event for pfpod: {kubelet 127.0.0.1} Started: Started with rkt id 5c4864d0
Oct 8 15:59:48.996: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 15:59:48.996: INFO: pfpod 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 15:59:41 -0700 PDT }]
Oct 8 15:59:48.996: INFO:
Oct 8 15:59:48.996: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 15:59:48.998: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 15:59:48.998: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-vvcae" for this suite.
• Failure [25.071 seconds]
Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:223
With a server that expects a client request
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:186
should support a client that connects, sends data, and disconnects [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:185
Oct 8 15:59:48.987: Failed to parse kubectl port-forward output: error: POD_NAME is required for port-forward
see 'kubectl port-forward -h' for help.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
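Note: the three port-forwarding specs in this run all fail the same way: kubectl itself prints "error: POD_NAME is required for port-forward" before the e2e parser ever sees a forwarded port. One plausible reading (not confirmed by the log) is a version mismatch, i.e. the binary at /home/yifan/google-cloud-sdk/bin/kubectl is older than the suite expects; early kubectl releases took the pod via a -p flag (kubectl port-forward -p pfpod :80) rather than the positional "pfpod :80" form the test passes. The doubled "kubectl kubectl" in the "Asynchronously running" line is most likely just the logger printing the command path followed by its full argv (whose first element is again "kubectl"), not an actual duplicated argument. As an illustration only of the parsing step the failure points at (portforward.go:102), here is a minimal, hypothetical Go sketch; the function name and regexp are assumptions, not the framework's actual code:

package main

import (
	"fmt"
	"regexp"
)

// parseLocalPort pulls the local port out of kubectl's usual
// "Forwarding from 127.0.0.1:<port> -> 80" line.
func parseLocalPort(line string) (string, error) {
	re := regexp.MustCompile(`Forwarding from 127\.0\.0\.1:(\d+)`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		return "", fmt.Errorf("failed to parse kubectl port-forward output: %s", line)
	}
	return m[1], nil
}

func main() {
	// An old kubectl that still wants "-p POD" prints the error seen above
	// instead of a "Forwarding from ..." line, so parsing can only fail.
	fmt.Println(parseLocalPort("error: POD_NAME is required for port-forward"))
	fmt.Println(parseLocalPort("Forwarding from 127.0.0.1:43210 -> 80"))
}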
------------------------------
SS
------------------------------
Port forwarding With a server that expects no client request
should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:221
[BeforeEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 15:59:54.007: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-qs1y1
Oct 8 15:59:54.009: INFO: Get service account default in ns e2e-tests-port-forwarding-qs1y1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 15:59:56.010: INFO: Service account default in ns e2e-tests-port-forwarding-qs1y1 with secrets found. (2.003123941s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 15:59:56.010: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-qs1y1
Oct 8 15:59:56.011: INFO: Service account default in ns e2e-tests-port-forwarding-qs1y1 with secrets found. (988.847µs)
[It] should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:221
STEP: creating the target pod
Oct 8 15:59:56.014: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 8 15:59:56.015: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (1.789892ms elapsed)
Oct 8 15:59:58.017: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (2.003273588s elapsed)
Oct 8 16:00:00.019: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (4.005677215s elapsed)
Oct 8 16:00:02.021: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (6.007459181s elapsed)
Oct 8 16:00:04.023: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (8.009263161s elapsed)
Oct 8 16:00:06.025: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-qs1y1' status to be 'running'(found phase: "Pending", readiness: false) (10.011007227s elapsed)
Oct 8 16:00:08.026: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Oct 8 16:00:08.026: INFO: starting port-forward command and streaming output
Oct 8 16:00:08.026: INFO: Asynchronously running '/home/yifan/google-cloud-sdk/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth port-forward --namespace=e2e-tests-port-forwarding-qs1y1 pfpod :80'
Oct 8 16:00:08.027: INFO: reading from `kubectl port-forward` command's stderr
[AfterEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-qs1y1".
Oct 8 16:00:08.038: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to 127.0.0.1
Oct 8 16:00:08.039: INFO: event for pfpod: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/portforwardtester:1.0" already present on machine
Oct 8 16:00:08.039: INFO: event for pfpod: {kubelet 127.0.0.1} Created: Created with rkt id c266f33d
Oct 8 16:00:08.039: INFO: event for pfpod: {kubelet 127.0.0.1} Started: Started with rkt id c266f33d
Oct 8 16:00:08.040: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:00:08.040: INFO: pfpod 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:00:06 -0700 PDT }]
Oct 8 16:00:08.040: INFO:
Oct 8 16:00:08.040: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:00:08.042: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:00:08.042: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-qs1y1" for this suite.
• Failure [19.043 seconds]
Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:223
With a server that expects no client request
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:222
should support a client that connects, sends no data, and disconnects [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:221
Oct 8 16:00:08.034: Failed to parse kubectl port-forward output: error: POD_NAME is required for port-forward
see 'kubectl port-forward -h' for help.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
------------------------------
Pods
should be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:543
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:00:13.050: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-s75xf
Oct 8 16:00:13.051: INFO: Get service account default in ns e2e-tests-pods-s75xf failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:00:15.053: INFO: Service account default in ns e2e-tests-pods-s75xf with secrets found. (2.003151727s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:00:15.053: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-s75xf
Oct 8 16:00:15.054: INFO: Service account default in ns e2e-tests-pods-s75xf with secrets found. (1.04271ms)
[It] should be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:543
STEP: Creating pod liveness-http in namespace e2e-tests-pods-s75xf
Oct 8 16:00:15.056: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Oct 8 16:00:15.058: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (1.840586ms elapsed)
Oct 8 16:00:17.060: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (2.003402356s elapsed)
Oct 8 16:00:19.062: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (4.005511981s elapsed)
Oct 8 16:00:21.064: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (6.00743018s elapsed)
Oct 8 16:00:23.066: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (8.009290271s elapsed)
Oct 8 16:00:25.068: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-s75xf' status to be '!pending'(found phase: "Pending", readiness: false) (10.011342673s elapsed)
Oct 8 16:00:27.070: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-s75xf' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-s75xf
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: Restart count of pod e2e-tests-pods-s75xf/liveness-http is now 1 (14.016159567s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:00:41.178: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:00:41.179: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:00:41.179: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-s75xf" for this suite.
• [SLOW TEST:33.190 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:543
------------------------------
Probing container
with readiness probe should not be ready before initial delay and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
[BeforeEach] Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:00:46.239: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-fwaqp
Oct 8 16:00:46.241: INFO: Get service account default in ns e2e-tests-container-probe-fwaqp failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:00:48.242: INFO: Service account default in ns e2e-tests-container-probe-fwaqp with secrets found. (2.002673853s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:00:48.242: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-fwaqp
Oct 8 16:00:48.243: INFO: Service account default in ns e2e-tests-container-probe-fwaqp with secrets found. (1.125496ms)
[It] with readiness probe should not be ready before initial delay and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
Oct 8 16:00:50.250: INFO: pod is not yet ready; pod has phase "Pending".
Oct 8 16:00:52.251: INFO: pod is not yet ready; pod has phase "Pending".
Oct 8 16:00:54.251: INFO: pod is not yet ready; pod has phase "Pending".
Oct 8 16:00:56.252: INFO: pod is not yet ready; pod has phase "Pending".
Oct 8 16:00:58.251: INFO: pod is not yet ready; pod has phase "Pending".
Oct 8 16:01:00.251: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:02.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:04.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:06.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:08.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:10.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:12.252: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:14.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:16.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:18.251: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:20.251: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:22.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:24.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:26.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:28.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:30.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:32.251: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:34.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:36.250: INFO: pod is not yet ready; pod has phase "Running".
Oct 8 16:01:38.250: INFO: pod is not yet ready; pod has phase "Running".
[AfterEach] Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Oct 8 16:01:40.252: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:01:40.253: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:01:40.253: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-fwaqp" for this suite.
• [SLOW TEST:59.022 seconds]
Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe should not be ready before initial delay and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
------------------------------
Port forwarding With a server that expects a client request
should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:139
[BeforeEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:01:45.261: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-m11q8
Oct 8 16:01:45.262: INFO: Get service account default in ns e2e-tests-port-forwarding-m11q8 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:01:47.264: INFO: Service account default in ns e2e-tests-port-forwarding-m11q8 with secrets found. (2.002233374s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:01:47.264: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-port-forwarding-m11q8
Oct 8 16:01:47.265: INFO: Service account default in ns e2e-tests-port-forwarding-m11q8 with secrets found. (970.724µs)
[It] should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:139
STEP: creating the target pod
Oct 8 16:01:47.267: INFO: Waiting up to 5m0s for pod pfpod status to be running
Oct 8 16:01:47.269: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (1.860802ms elapsed)
Oct 8 16:01:49.270: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (2.003328499s elapsed)
Oct 8 16:01:51.272: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (4.004923867s elapsed)
Oct 8 16:01:53.274: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (6.006829347s elapsed)
Oct 8 16:01:55.276: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (8.008694757s elapsed)
Oct 8 16:01:57.277: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-m11q8' status to be 'running'(found phase: "Pending", readiness: false) (10.010437096s elapsed)
Oct 8 16:01:59.279: INFO: Found pod 'pfpod' on node '127.0.0.1'
STEP: Running 'kubectl port-forward'
Oct 8 16:01:59.279: INFO: starting port-forward command and streaming output
Oct 8 16:01:59.279: INFO: Asynchronously running '/home/yifan/google-cloud-sdk/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth port-forward --namespace=e2e-tests-port-forwarding-m11q8 pfpod :80'
Oct 8 16:01:59.280: INFO: reading from `kubectl port-forward` command's stderr
[AfterEach] Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-port-forwarding-m11q8".
Oct 8 16:01:59.295: INFO: event for pfpod: {scheduler } Scheduled: Successfully assigned pfpod to 127.0.0.1
Oct 8 16:01:59.295: INFO: event for pfpod: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/portforwardtester:1.0" already present on machine
Oct 8 16:01:59.295: INFO: event for pfpod: {kubelet 127.0.0.1} Created: Created with rkt id 7e8609e6
Oct 8 16:01:59.295: INFO: event for pfpod: {kubelet 127.0.0.1} Started: Started with rkt id 7e8609e6
Oct 8 16:01:59.297: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:01:59.297: INFO: pfpod 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:01:57 -0700 PDT }]
Oct 8 16:01:59.297: INFO:
Oct 8 16:01:59.297: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:01:59.298: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:01:59.298: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-port-forwarding-m11q8" for this suite.
• Failure [19.046 seconds]
Port forwarding
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:223
With a server that expects a client request
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:186
should support a client that connects, sends no data, and disconnects [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:139
Oct 8 16:01:59.290: Failed to parse kubectl port-forward output: error: POD_NAME is required for port-forward
see 'kubectl port-forward -h' for help.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
------------------------------
Kubectl client Guestbook application
should create and stop a working application
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:141
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:02:04.308: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-0fila
Oct 8 16:02:04.309: INFO: Get service account default in ns e2e-tests-kubectl-0fila failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:02:06.310: INFO: Service account default in ns e2e-tests-kubectl-0fila with secrets found. (2.002184576s)
[BeforeEach] Guestbook application
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:131
[It] should create and stop a working application
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:141
STEP: creating all guestbook components
Oct 8 16:02:06.310: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-0fila'
STEP: using delete to clean up resources
Oct 8 16:02:06.323: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-0fila'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-0fila
• Failure [7.037 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Guestbook application
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:142
should create and stop a working application [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:141
Oct 8 16:02:06.322: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-0fila] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook" does not exist
[] <nil> 0xc208daab40 exit status 1 <nil> true [0xc20824c998 0xc20824c9e8 0xc20824ca50] [0xc20824c998 0xc20824c9e8 0xc20824ca50] [0xc20824c9b0 0xc20824ca48] [0x6bd870 0x6bd870] 0xc208b90300}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
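Note: this spec fails before it touches the cluster at all. kubectl create -f is pointed at /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook, and that path does not exist on this machine; it looks like the suite's repo root resolves to a different checkout than the kubernetes tree the test binaries were built from, so the guestbook manifests simply are not there. The failure says nothing about the rkt stage1 under test.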
------------------------------
Kubectl client Kubectl run rc
should create an rc from an image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:578
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:02:11.345: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-oxx1j
Oct 8 16:02:11.345: INFO: Get service account default in ns e2e-tests-kubectl-oxx1j failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:02:13.347: INFO: Service account default in ns e2e-tests-kubectl-oxx1j with secrets found. (2.002122777s)
[BeforeEach] Kubectl run rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:546
[It] should create an rc from an image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:578
STEP: running the image nginx
Oct 8 16:02:13.347: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth run e2e-test-nginx-rc --image=nginx --namespace=e2e-tests-kubectl-oxx1j'
Oct 8 16:02:13.410: INFO: CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
e2e-test-nginx-rc e2e-test-nginx-rc nginx run=e2e-test-nginx-rc 1
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
[AfterEach] Kubectl run rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:550
Oct 8 16:02:15.414: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-oxx1j'
Oct 8 16:02:17.437: INFO: replicationcontrollers/e2e-test-nginx-rc
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-oxx1j
• [SLOW TEST:11.188 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl run rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:580
should create an rc from an image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:578
------------------------------
kube-ui
should check that the kube-ui instance is alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:83
[BeforeEach] kube-ui
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:02:22.533: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-93f60
Oct 8 16:02:22.535: INFO: Get service account default in ns e2e-tests-kube-ui-93f60 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:02:24.536: INFO: Service account default in ns e2e-tests-kube-ui-93f60 with secrets found. (2.003477846s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:02:24.536: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-93f60
Oct 8 16:02:24.537: INFO: Service account default in ns e2e-tests-kube-ui-93f60 with secrets found. (968.724µs)
[It] should check that the kube-ui instance is alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:83
STEP: Checking the kube-ui service exists.
[AfterEach] kube-ui
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-kube-ui-93f60".
Oct 8 16:03:24.544: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:03:24.544: INFO:
Oct 8 16:03:24.544: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:03:24.546: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:03:24.546: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kube-ui-93f60" for this suite.
• Failure [67.022 seconds]
kube-ui
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:84
should check that the kube-ui instance is alive [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:83
Expected error:
<*errors.errorString | 0xc2084c1a30>: {
s: "error waiting for service kube-system/kube-ui to appear: timed out waiting for the condition",
}
error waiting for service kube-system/kube-ui to appear: timed out waiting for the condition
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:44
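Note: this is a plain timeout waiting for the kube-system/kube-ui service to exist. The likely explanation is that the kube-ui addon is not deployed on this local cluster, so the spec has nothing to probe and can only time out after its polling window; again, a property of the test environment rather than of the rkt stage1.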
------------------------------
Job
should keep restarting failed pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:106
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:03:29.556: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-zloph
Oct 8 16:03:29.557: INFO: Get service account default in ns e2e-tests-job-zloph failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:03:31.558: INFO: Service account default in ns e2e-tests-job-zloph with secrets found. (2.002335504s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:03:31.558: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-zloph
Oct 8 16:03:31.559: INFO: Service account default in ns e2e-tests-job-zloph with secrets found. (880.907µs)
[It] should keep restarting failed pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:106
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-zloph".
Oct 8 16:03:31.566: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:03:31.566: INFO:
Oct 8 16:03:31.566: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:03:31.567: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:03:31.568: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-zloph" for this suite.
• Failure [7.020 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should keep restarting failed pods [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:106
Expected error:
<*errors.StatusError | 0xc208a75b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:96
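Note: both Job specs in this run ("should keep restarting failed pods" here and "should scale a job up" later) fail with the same 404, "the server could not find the requested resource", on the very first create call. A plausible reading is that the local apiserver is not serving the API group/endpoint the Jobs client expects: at this point Jobs sat behind the extensions/experimental API group and typically had to be enabled explicitly on the apiserver (its runtime-config flag), so a default local cluster would return exactly this NotFound.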
------------------------------
Proxy version v1
should proxy logs on node with explicit kubelet port
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
[BeforeEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:03:36.576: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-kyrd8
Oct 8 16:03:36.577: INFO: Get service account default in ns e2e-tests-proxy-kyrd8 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:03:38.579: INFO: Service account default in ns e2e-tests-proxy-kyrd8 with secrets found. (2.002075972s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:03:38.579: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-kyrd8
Oct 8 16:03:38.579: INFO: Service account default in ns e2e-tests-proxy-kyrd8 with secrets found. (902.523µs)
[It] should proxy logs on node with explicit kubelet port
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
Oct 8 16:03:38.584: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 3.25985ms)
Oct 8 16:03:38.586: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.614052ms)
Oct 8 16:03:38.587: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.773355ms)
Oct 8 16:03:38.589: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.719118ms)
Oct 8 16:03:38.591: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.726569ms)
Oct 8 16:03:38.593: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.728758ms)
Oct 8 16:03:38.594: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 1.722533ms)
Oct 8 16:03:38.776: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 181.124138ms)
Oct 8 16:03:38.976: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.057366ms)
Oct 8 16:03:39.176: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.778958ms)
Oct 8 16:03:39.376: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.053576ms)
Oct 8 16:03:39.576: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.055197ms)
Oct 8 16:03:39.776: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.907044ms)
Oct 8 16:03:39.977: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 201.155812ms)
Oct 8 16:03:40.176: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.46127ms)
Oct 8 16:03:40.376: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.32624ms)
Oct 8 16:03:40.576: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.664192ms)
Oct 8 16:03:40.776: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 199.181998ms)
Oct 8 16:03:40.976: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 200.099538ms)
Oct 8 16:03:41.178: INFO: /api/v1/proxy/nodes/127.0.0.1:10250/logs/: <pre>
<a href="kern.log.2.gz">kern.log.2.gz</a>
<a href="syslog.3.gz">syslog.3.gz</a>
<a href="li... (200; 201.992751ms)
[AfterEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:03:41.178: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:03:41.376: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:03:41.376: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-kyrd8" for this suite.
• [SLOW TEST:5.401 seconds]
Proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node with explicit kubelet port
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
------------------------------
Reboot
each node by dropping all inbound packets for a while and ensure they function afterwards
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:91
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by dropping all inbound packets for a while and ensure they function afterwards [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:91
Oct 8 16:03:41.975: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Docker Containers
should use the image defaults if command and args are blank
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
[BeforeEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:03:41.980: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-s4bj7
Oct 8 16:03:41.981: INFO: Get service account default in ns e2e-tests-containers-s4bj7 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:03:43.983: INFO: Service account default in ns e2e-tests-containers-s4bj7 with secrets found. (2.002687991s)
[It] should use the image defaults if command and args are blank
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
STEP: Creating a pod to test use defaults
Oct 8 16:03:43.985: INFO: Waiting up to 5m0s for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:03:43.986: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:03:43.986: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.352622ms elapsed)
Oct 8 16:03:45.988: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:03:45.988: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00283841s elapsed)
Oct 8 16:03:47.989: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:03:47.989: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.00437794s elapsed)
Oct 8 16:03:49.991: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-s4bj7' so far
Oct 8 16:03:49.991: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.0061751s elapsed)
Oct 8 16:03:51.993: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-s4bj7' so far
Oct 8 16:03:51.993: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.007880579s elapsed)
Oct 8 16:03:53.995: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-s4bj7' so far
Oct 8 16:03:53.995: INFO: Waiting for pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-s4bj7' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.009640178s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-d2ccbbe5-6e10-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments]
[AfterEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:19.155 seconds]
Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should use the image defaults if command and args are blank
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
------------------------------
Variable Expansion
should allow substituting values in a container's args
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
[BeforeEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:04:01.137: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-kt4bi
Oct 8 16:04:01.138: INFO: Get service account default in ns e2e-tests-var-expansion-kt4bi failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:04:03.141: INFO: Service account default in ns e2e-tests-var-expansion-kt4bi with secrets found. (2.003929749s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:04:03.141: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-kt4bi
Oct 8 16:04:03.142: INFO: Service account default in ns e2e-tests-var-expansion-kt4bi with secrets found. (1.716459ms)
[It] should allow substituting values in a container's args
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
STEP: Creating a pod to test substitution in container's args
Oct 8 16:04:03.147: INFO: Waiting up to 5m0s for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:04:03.159: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:04:03.159: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.682654ms elapsed)
Oct 8 16:04:05.162: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:04:05.162: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.014934629s elapsed)
Oct 8 16:04:07.164: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' yet
Oct 8 16:04:07.164: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.017263152s elapsed)
Oct 8 16:04:09.167: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-kt4bi' so far
Oct 8 16:04:09.167: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.020049688s elapsed)
Oct 8 16:04:11.169: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-kt4bi' so far
Oct 8 16:04:11.169: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.022172712s elapsed)
Oct 8 16:04:13.171: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-kt4bi' so far
Oct 8 16:04:13.171: INFO: Waiting for pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-kt4bi' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.024399408s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 container dapi-container: <nil>
STEP: Successfully fetched pod logs:sh: TEST_VAR: not found
[AfterEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-var-expansion-kt4bi".
Oct 8 16:04:15.242: INFO: event for var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276: {scheduler } Scheduled: Successfully assigned var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276 to 127.0.0.1
Oct 8 16:04:15.242: INFO: event for var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/busybox" already present on machine
Oct 8 16:04:15.242: INFO: event for var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Created: Created with rkt id 22056605
Oct 8 16:04:15.242: INFO: event for var-expansion-de3845ab-6e10-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Started: Started with rkt id 22056605
Oct 8 16:04:15.244: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:04:15.244: INFO:
Oct 8 16:04:15.244: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:04:15.245: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:04:15.245: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-kt4bi" for this suite.
• Failure [19.151 seconds]
Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's args [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
"test-value" in container output
Expected
<string>: sh: TEST_VAR: not found
to contain substring
<string>: test-value
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
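Note: the expected substring "test-value" never appears because the container's output is "sh: TEST_VAR: not found", meaning the shell received the literal "$(TEST_VAR)" and treated it as command substitution (running a non-existent TEST_VAR command) rather than the already-expanded value. That suggests the $(VAR) expansion Kubernetes performs on container command/args did not happen on the rkt code path before the args reached the container. As an illustration of the expected semantics only (this is not the kubelet's implementation, and the helper name is made up), a minimal Go sketch:

package main

import (
	"fmt"
	"regexp"
)

// expandArgs replaces $(NAME) references in a container arg with values from
// the container's environment, leaving unknown references untouched. This is
// the substitution the test relies on happening before the shell ever runs.
func expandArgs(arg string, env map[string]string) string {
	re := regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)
	return re.ReplaceAllStringFunc(arg, func(m string) string {
		name := re.FindStringSubmatch(m)[1]
		if v, ok := env[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	env := map[string]string{"TEST_VAR": "test-value"}
	fmt.Println(expandArgs("echo $(TEST_VAR)", env)) // prints: echo test-value
}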
------------------------------
S
------------------------------
Kubectl client Proxy server
should support proxy with --port 0
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:04:20.288: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-r5vxa
Oct 8 16:04:20.289: INFO: Get service account default in ns e2e-tests-kubectl-r5vxa failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:04:22.291: INFO: Service account default in ns e2e-tests-kubectl-r5vxa with secrets found. (2.002723494s)
[It] should support proxy with --port 0
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652
STEP: starting the proxy server
Oct 8 16:04:22.291: INFO: Asynchronously running '/home/yifan/google-cloud-sdk/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth proxy -p 0'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-r5vxa
• Failure [7.022 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Proxy server
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:681
should support proxy with --port 0 [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652
Oct 8 16:04:22.298: Failed to start proxy server: Failed to parse port from proxy stdout: Starting to serve on localhost:0
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:644
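Note: here kubectl proxy was started with -p 0 and its stdout was "Starting to serve on localhost:0", i.e. it echoed the requested port instead of reporting the ephemeral port it actually bound, so the test has no real port to parse. Newer kubectl builds print the actual bound port in that line, which is consistent with the same older Cloud SDK kubectl suspected in the port-forward failures above.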
------------------------------
SchedulerPredicates
validates MaxPods limit number of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:229
[BeforeEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:153
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:04:27.309: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-ee5yc
Oct 8 16:04:27.311: INFO: Get service account default in ns e2e-tests-sched-pred-ee5yc failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:04:29.312: INFO: Service account default in ns e2e-tests-sched-pred-ee5yc with secrets found. (2.00279165s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:04:29.312: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-ee5yc
Oct 8 16:04:29.313: INFO: Service account default in ns e2e-tests-sched-pred-ee5yc with secrets found. (916.823µs)
[It] validates MaxPods limit number of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:229
Oct 8 16:04:29.314: INFO: Node: {{ } {127.0.0.1 /api/v1/nodes/127.0.0.1 312666ae-6e07-11e5-956c-28d244b00276 3479 0 2015-10-08 14:54:47 -0700 PDT <nil> <nil> map[kubernetes.io/hostname:127.0.0.1] map[]} { 127.0.0.1 false} {map[cpu:{4.000 DecimalSI} memory:{8048775168.000 BinarySI} pods:{40.000 DecimalSI}] [{Ready True 2015-10-08 16:04:26 -0700 PDT 2015-10-08 14:54:47 -0700 PDT KubeletReady kubelet is posting ready status}] [{LegacyHostIP 127.0.0.1} {InternalIP 127.0.0.1}] {{10250}} {813e56ef1c4f171bda95b46b5448007c 7126C581-5370-11CB-A905-84466A98BB46 5d6e4778-5d0b-4df2-8dc6-063d8a2b4c3b 3.19.0-30-generic Ubuntu 15.04 docker://1.8.0-dev v1.2.0-alpha.1.589+081615b38dd04f-dirty v1.2.0-alpha.1.589+081615b38dd04f-dirty}}}
STEP: Starting additional 40 Pods to fully saturate the cluster max pods and trying to start another one
Oct 8 16:04:36.516: INFO: 40 pods running
Oct 8 16:04:41.519: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-ee5yc
[AfterEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:163
Oct 8 16:04:58.507: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:04:58.709: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:04:58.709: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-sched-pred-ee5yc" for this suite.
• [SLOW TEST:41.804 seconds]
SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:433
validates MaxPods limit number of pods that are allowed to run.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:229
------------------------------
P [PENDING]
Namespaces
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:120
should always delete fast (ALL of 100 namespaces in 150 seconds)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:119
------------------------------
S
------------------------------
Pod Disks
should schedule a pod w/ a RW PD, remove it, then schedule it on another host
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:124
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:05:09.113: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-yrfdm
Oct 8 16:05:09.114: INFO: Get service account default in ns e2e-tests-pod-disks-yrfdm failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:11.116: INFO: Service account default in ns e2e-tests-pod-disks-yrfdm with secrets found. (2.002464305s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:05:11.116: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-yrfdm
Oct 8 16:05:11.117: INFO: Service account default in ns e2e-tests-pod-disks-yrfdm with secrets found. (1.118971ms)
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:66
[AfterEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:05:11.121: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:05:11.123: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:05:11.123: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pod-disks-yrfdm" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.019 seconds]
Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:297
should schedule a pod w/ a RW PD, remove it, then schedule it on another host [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:124
Oct 8 16:05:11.117: Requires at least 2 nodes (not -1)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
S
------------------------------
Services
should be able to change the type and nodeport settings of a service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:584
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:05:16.132: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6kk0w
Oct 8 16:05:16.133: INFO: Get service account default in ns e2e-tests-services-6kk0w failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:18.135: INFO: Service account default in ns e2e-tests-services-6kk0w with secrets found. (2.002363915s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:05:18.135: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6kk0w
Oct 8 16:05:18.136: INFO: Service account default in ns e2e-tests-services-6kk0w with secrets found. (1.233629ms)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should be able to change the type and nodeport settings of a service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:584
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:05:18.140: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:05:18.142: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:05:18.142: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-6kk0w" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
S [SKIPPING] [7.020 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should be able to change the type and nodeport settings of a service [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:584
Oct 8 16:05:18.136: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Job
should scale a job up
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:132
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:05:23.153: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-e92gs
Oct 8 16:05:23.154: INFO: Get service account default in ns e2e-tests-job-e92gs failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:25.156: INFO: Service account default in ns e2e-tests-job-e92gs with secrets found. (2.002774465s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:05:25.156: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-e92gs
Oct 8 16:05:25.157: INFO: Service account default in ns e2e-tests-job-e92gs with secrets found. (1.202151ms)
[It] should scale a job up
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:132
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-e92gs".
Oct 8 16:05:25.162: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:05:25.162: INFO:
Oct 8 16:05:25.162: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:05:25.164: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:05:25.164: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-e92gs" for this suite.
• Failure [7.022 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should scale a job up [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:132
Expected error:
<*errors.StatusError | 0xc20861a400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:115
------------------------------
Pod Disks
should schedule a pod w/ a readonly PD on two hosts, then remove both.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:179
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:05:30.174: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-vb72r
Oct 8 16:05:30.175: INFO: Get service account default in ns e2e-tests-pod-disks-vb72r failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:32.177: INFO: Service account default in ns e2e-tests-pod-disks-vb72r with secrets found. (2.002964071s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:05:32.177: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-vb72r
Oct 8 16:05:32.178: INFO: Service account default in ns e2e-tests-pod-disks-vb72r with secrets found. (1.259984ms)
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:66
[AfterEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:05:32.182: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:05:32.184: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:05:32.184: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pod-disks-vb72r" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.020 seconds]
Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:297
should schedule a pod w/ a readonly PD on two hosts, then remove both. [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:179
Oct 8 16:05:32.178: Requires at least 2 nodes (not -1)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Kubectl client Kubectl patch
should add annotations for pods in rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:524
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:05:37.194: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ekvb2
Oct 8 16:05:37.195: INFO: Get service account default in ns e2e-tests-kubectl-ekvb2 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:39.196: INFO: Service account default in ns e2e-tests-kubectl-ekvb2 with secrets found. (2.002140834s)
[It] should add annotations for pods in rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:524
STEP: creating Redis RC
Oct 8 16:05:39.196: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-ekvb2'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-ekvb2
• Failure [7.021 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl patch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:525
should add annotations for pods in rc [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:524
Oct 8 16:05:39.204: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-ekvb2] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
[] <nil> 0xc20891e3e0 exit status 1 <nil> true [0xc20824d260 0xc20824d288 0xc20824d2b8] [0xc20824d260 0xc20824d288 0xc20824d2b8] [0xc20824d280 0xc20824d2b0] [0x6bd870 0x6bd870] 0xc208c40840}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/examples/guestbook-go/redis-master-controller.json" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
SS
------------------------------
Pods
should contain environment variables for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:464
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:05:44.216: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1et7p
Oct 8 16:05:44.218: INFO: Get service account default in ns e2e-tests-pods-1et7p failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:05:46.219: INFO: Service account default in ns e2e-tests-pods-1et7p with secrets found. (2.002848091s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:05:46.219: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-1et7p
Oct 8 16:05:46.220: INFO: Service account default in ns e2e-tests-pods-1et7p with secrets found. (920.635µs)
[It] should contain environment variables for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:464
Oct 8 16:05:46.222: INFO: Waiting up to 5m0s for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 status to be running
Oct 8 16:05:46.227: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (4.323092ms elapsed)
Oct 8 16:05:48.229: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (2.006792384s elapsed)
Oct 8 16:05:50.231: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (4.008373519s elapsed)
Oct 8 16:05:52.233: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (6.010196004s elapsed)
Oct 8 16:05:54.234: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (8.011839173s elapsed)
Oct 8 16:05:56.237: INFO: Waiting for pod server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'running'(found phase: "Pending", readiness: false) (10.014773698s elapsed)
Oct 8 16:05:58.239: INFO: Found pod 'server-envvars-1ba8a6ad-6e11-11e5-bcd2-28d244b00276' on node '127.0.0.1'
STEP: Creating a pod to test service env
Oct 8 16:05:58.330: INFO: Waiting up to 5m0s for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:05:58.333: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' yet
Oct 8 16:05:58.333: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.134632ms elapsed)
Oct 8 16:06:00.334: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' yet
Oct 8 16:06:00.334: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003800697s elapsed)
Oct 8 16:06:02.336: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' yet
Oct 8 16:06:02.336: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005411486s elapsed)
Oct 8 16:06:04.338: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-pods-1et7p' so far
Oct 8 16:06:04.338: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007209953s elapsed)
Oct 8 16:06:06.339: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-pods-1et7p' so far
Oct 8 16:06:06.339: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008953264s elapsed)
Oct 8 16:06:08.342: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-pods-1et7p' so far
Oct 8 16:06:08.342: INFO: Waiting for pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-1et7p' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011816835s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-envvars-22d9aed1-6e11-11e5-bcd2-28d244b00276 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOSERVICE_PORT_8765_TCP_PORT=8765
USER=root
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
AC_APP_NAME=env3cont
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.0.0.10:8765
LOGNAME=root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
SHELL=/bin/sh
FOOSERVICE_SERVICE_HOST=10.0.0.10
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
FOOSERVICE_PORT=tcp://10.0.0.10:8765
FOOSERVICE_SERVICE_PORT=8765
AC_METADATA_URL=
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.0.10
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:06:10.690: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:06:10.692: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:06:10.692: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-1et7p" for this suite.
• [SLOW TEST:31.572 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should contain environment variables for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:464
------------------------------
Reboot
each node by ordering unclean reboot and ensure they function upon restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:71
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by ordering unclean reboot and ensure they function upon restart [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:71
Oct 8 16:06:15.785: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
SchedulerPredicates
validates that NodeSelector is respected if matching.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:432
[BeforeEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:153
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:06:15.793: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-6e0e6
Oct 8 16:06:15.794: INFO: Get service account default in ns e2e-tests-sched-pred-6e0e6 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:06:17.796: INFO: Service account default in ns e2e-tests-sched-pred-6e0e6 with secrets found. (2.002313405s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:06:17.796: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-6e0e6
Oct 8 16:06:17.797: INFO: Service account default in ns e2e-tests-sched-pred-6e0e6 with secrets found. (1.232567ms)
[It] validates that NodeSelector is respected if matching.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:432
STEP: Trying to launch a pod without a label to get a node which can launch it.
Oct 8 16:06:17.802: INFO: Waiting up to 5m0s for pod without-label status to be running
Oct 8 16:06:17.804: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (1.420907ms elapsed)
Oct 8 16:06:19.805: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (2.002943514s elapsed)
Oct 8 16:06:21.807: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (4.004385797s elapsed)
Oct 8 16:06:23.808: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (6.006050715s elapsed)
Oct 8 16:06:25.810: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (8.007910921s elapsed)
Oct 8 16:06:27.812: INFO: Waiting for pod without-label in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (10.009766149s elapsed)
Oct 8 16:06:29.814: INFO: Found pod 'without-label' on node '127.0.0.1'
STEP: Trying to apply a random label on the found node.
STEP: Trying to relaunch the pod, now with labels.
Oct 8 16:06:30.023: INFO: Waiting up to 5m0s for pod with-labels status to be running
Oct 8 16:06:30.026: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (2.330971ms elapsed)
Oct 8 16:06:32.028: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (2.003685796s elapsed)
Oct 8 16:06:34.029: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (4.005283503s elapsed)
Oct 8 16:06:36.031: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (6.00701835s elapsed)
Oct 8 16:06:38.033: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (8.008770815s elapsed)
Oct 8 16:06:40.035: INFO: Waiting for pod with-labels in namespace 'e2e-tests-sched-pred-6e0e6' status to be 'running'(found phase: "Pending", readiness: false) (10.010475614s elapsed)
Oct 8 16:06:42.036: INFO: Found pod 'with-labels' on node '127.0.0.1'
[AfterEach] SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:163
Oct 8 16:06:42.111: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:06:42.112: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:06:42.112: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-sched-pred-6e0e6" for this suite.
• [SLOW TEST:31.416 seconds]
SchedulerPredicates
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:433
validates that NodeSelector is respected if matching.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:432
------------------------------
Job
should run a job to completion when tasks sometimes fail and are not locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:89
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:06:47.209: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-pmbf2
Oct 8 16:06:47.210: INFO: Get service account default in ns e2e-tests-job-pmbf2 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:06:49.213: INFO: Service account default in ns e2e-tests-job-pmbf2 with secrets found. (2.003855647s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:06:49.213: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-pmbf2
Oct 8 16:06:49.215: INFO: Service account default in ns e2e-tests-job-pmbf2 with secrets found. (1.970282ms)
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:89
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-pmbf2".
Oct 8 16:06:49.224: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:06:49.224: INFO:
Oct 8 16:06:49.224: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:06:49.226: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:06:49.226: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-pmbf2" for this suite.
• Failure [7.026 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should run a job to completion when tasks sometimes fail and are not locally restarted [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:89
Expected error:
<*errors.StatusError | 0xc208788c80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:84
------------------------------
SS
------------------------------
Services
should be able to create a functioning NodePort service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:402
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:06:54.235: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6li79
Oct 8 16:06:54.236: INFO: Get service account default in ns e2e-tests-services-6li79 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:06:56.238: INFO: Service account default in ns e2e-tests-services-6li79 with secrets found. (2.002228266s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:06:56.238: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6li79
Oct 8 16:06:56.239: INFO: Service account default in ns e2e-tests-services-6li79 with secrets found. (923.39µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should be able to create a functioning NodePort service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:402
STEP: creating service nodeportservice-test with type=NodePort in namespace e2e-tests-services-6li79
STEP: creating pod to be part of service nodeportservice-test
Oct 8 16:06:56.352: INFO: Pod name webserver: Found 0 pods out of 1
Oct 8 16:07:01.354: INFO: Pod name webserver: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 8 16:07:01.354: INFO: Waiting up to 5m0s for pod webserver-47dy9 status to be running
Oct 8 16:07:01.357: INFO: Waiting for pod webserver-47dy9 in namespace 'e2e-tests-services-6li79' status to be 'running'(found phase: "Pending", readiness: false) (2.290326ms elapsed)
Oct 8 16:07:03.360: INFO: Waiting for pod webserver-47dy9 in namespace 'e2e-tests-services-6li79' status to be 'running'(found phase: "Pending", readiness: false) (2.005931212s elapsed)
Oct 8 16:07:05.362: INFO: Waiting for pod webserver-47dy9 in namespace 'e2e-tests-services-6li79' status to be 'running'(found phase: "Pending", readiness: false) (4.007707992s elapsed)
Oct 8 16:07:07.364: INFO: Found pod 'webserver-47dy9' on node '127.0.0.1'
STEP: trying to dial each unique pod
Oct 8 16:07:07.372: INFO: Controller webserver: Got non-empty result from replica 1 [webserver-47dy9]: "<pre>\n<a href=\"test-webserver\">test-webserver</a>\n<a href=\"proc/\">proc/</a>\n<a href=\"var/\">var/</a>\n<a href=\"dev/\">dev/</a>\n<a href=\"sys/\">sys/</a>\n<a href=\"etc/\">etc/</a>\n<a href=\"tmp/\">tmp/</a>\n</pre>\n", 1 of 1 required successes so far
STEP: hitting the pod through the service's NodePort
STEP: Waiting up to 5m0s for the url http://127.0.0.1:31453 to be reachable
Oct 8 16:07:07.378: INFO: Successfully reached http://127.0.0.1:31453
STEP: deleting service nodeportservice-test in namespace e2e-tests-services-6li79
STEP: stopping RC webserver in namespace e2e-tests-services-6li79
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:07:07.636: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:07:07.645: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:07:07.645: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-6li79" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:18.477 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should be able to create a functioning NodePort service
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:402
------------------------------
Nodes Resize
should be able to delete nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:446
[BeforeEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:395
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:07:12.713: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-bgb51
Oct 8 16:07:12.714: INFO: Get service account default in ns e2e-tests-resize-nodes-bgb51 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:07:14.715: INFO: Service account default in ns e2e-tests-resize-nodes-bgb51 with secrets found. (2.002573085s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:07:14.715: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-resize-nodes-bgb51
Oct 8 16:07:14.717: INFO: Service account default in ns e2e-tests-resize-nodes-bgb51 with secrets found. (1.333249ms)
[BeforeEach] Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:407
[AfterEach] Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:424
[AfterEach] Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:397
Oct 8 16:07:14.721: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:07:14.723: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:07:14.723: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-resize-nodes-bgb51" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.021 seconds]
Nodes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:539
Resize
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:473
should be able to delete nodes [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:446
Oct 8 16:07:14.717: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Cluster level logging using Elasticsearch
should check that logs from pods on all nodes are ingested into Elasticsearch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/es_cluster_logging.go:46
[BeforeEach] Cluster level logging using Elasticsearch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:07:19.733: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-es-logging-18mrn
Oct 8 16:07:19.735: INFO: Get service account default in ns e2e-tests-es-logging-18mrn failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:07:21.737: INFO: Service account default in ns e2e-tests-es-logging-18mrn with secrets found. (2.003100784s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:07:21.737: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-es-logging-18mrn
Oct 8 16:07:21.738: INFO: Service account default in ns e2e-tests-es-logging-18mrn with secrets found. (1.439646ms)
[BeforeEach] Cluster level logging using Elasticsearch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/es_cluster_logging.go:42
[AfterEach] Cluster level logging using Elasticsearch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:07:21.742: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:07:21.744: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:07:21.744: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-es-logging-18mrn" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.021 seconds]
Cluster level logging using Elasticsearch
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/es_cluster_logging.go:47
should check that logs from pods on all nodes are ingested into Elasticsearch [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/es_cluster_logging.go:46
Oct 8 16:07:21.738: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
SS
------------------------------
Networking
should provide unchanging, static URL paths for kubernetes api services.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:07:26.758: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-o6jbj
Oct 8 16:07:26.762: INFO: Get service account default in ns e2e-tests-nettest-o6jbj failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:07:28.765: INFO: Service account default in ns e2e-tests-nettest-o6jbj with secrets found. (2.007015909s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:07:28.765: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-o6jbj
Oct 8 16:07:28.766: INFO: Service account default in ns e2e-tests-nettest-o6jbj with secrets found. (1.24392ms)
[BeforeEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:07:28.934: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:07:28.936: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:07:28.936: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-o6jbj" for this suite.
• [SLOW TEST:7.193 seconds]
Networking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide unchanging, static URL paths for kubernetes api services.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
------------------------------
S
------------------------------
Events
should be sent by kubelets and the scheduler about pods scheduling and running
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
[BeforeEach] Events
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:07:33.950: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-jkiz9
Oct 8 16:07:33.951: INFO: Get service account default in ns e2e-tests-events-jkiz9 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:07:35.952: INFO: Service account default in ns e2e-tests-events-jkiz9 with secrets found. (2.002555517s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:07:35.952: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-jkiz9
Oct 8 16:07:35.953: INFO: Service account default in ns e2e-tests-events-jkiz9 with secrets found. (947.377µs)
[It] should be sent by kubelets and the scheduler about pods scheduling and running
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 8 16:07:35.956: INFO: Waiting up to 5m0s for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 status to be running
Oct 8 16:07:35.958: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (1.765414ms elapsed)
Oct 8 16:07:37.960: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (2.004585719s elapsed)
Oct 8 16:07:39.971: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (4.015222726s elapsed)
Oct 8 16:07:41.973: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (6.01721061s elapsed)
Oct 8 16:07:43.975: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (8.019429009s elapsed)
Oct 8 16:07:45.978: INFO: Waiting for pod send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-events-jkiz9' status to be 'running'(found phase: "Pending", readiness: false) (10.021994537s elapsed)
Oct 8 16:07:47.980: INFO: Found pod 'send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
&{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 GenerateName: Namespace:e2e-tests-events-jkiz9 SelfLink:/api/v1/namespaces/e2e-tests-events-jkiz9/pods/send-events-5d10a0c2-6e11-11e5-bcd2-28d244b00276 UID:5d10b823-6e11-11e5-956c-28d244b00276 ResourceVersion:4085 Generation:0 CreationTimestamp:2015-10-08 16:07:35 -0700 PDT DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:foo time:953846920] Annotations:map[]} Spec:{Volumes:[{Name:default-token-ooah1 VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc208ac8a30 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil>}}] Containers:[{Name:p Image:gcr.io/google_containers/serve_hostname:1.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ooah1 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc208ac8a60 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:127.0.0.1 SecurityContext:0xc208ac8a68 ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2015-10-08 16:07:46 -0700 PDT Reason: Message:}] Message: Reason: HostIP:127.0.0.1 PodIP:172.16.28.50 StartTime:2015-10-08 16:07:36 -0700 PDT ContainerStatuses:[{Name:p State:{Waiting:<nil> Running:0xc2088c17e0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:gcr.io/google_containers/serve_hostname:1.1 ImageID: ContainerID:f7724fba-840b-456e-ba2e-f7cf911eb072:p}]}}
STEP: checking for scheduler event about the pod
Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] Events
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:07:51.991: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:07:51.993: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:07:51.993: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-events-jkiz9" for this suite.
• [SLOW TEST:23.104 seconds]
Events
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:128
should be sent by kubelets and the scheduler about pods scheduling and running
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/events.go:127
------------------------------
Kubelet experimental resource usage tracking
over 30m0s with 50 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:140
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:07:57.052: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-qyy5z
Oct 8 16:07:57.054: INFO: Get service account default in ns e2e-tests-kubelet-perf-qyy5z failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:07:59.056: INFO: Service account default in ns e2e-tests-kubelet-perf-qyy5z with secrets found. (2.003079158s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:07:59.056: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-qyy5z
Oct 8 16:07:59.056: INFO: Service account default in ns e2e-tests-kubelet-perf-qyy5z with secrets found. (914.291µs)
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:115
[It] over 30m0s with 50 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:140
STEP: Creating a RC of 50 pods and wait until all pods of this RC are running
STEP: creating replication controller resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-perf-qyy5z
Oct 8 16:07:59.064: INFO: Created replication controller with name: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276, namespace: e2e-tests-kubelet-perf-qyy5z, replica count: 50
Oct 8 16:08:09.064: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 0 running, 40 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:08:19.065: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 0 running, 40 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:08:29.065: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 0 running, 40 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:08:39.065: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 1 running, 39 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:08:49.065: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 3 running, 37 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:08:59.065: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 6 running, 34 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:09.066: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 8 running, 32 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:19.066: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 11 running, 29 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:29.066: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 15 running, 25 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:39.066: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 18 running, 22 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:49.084: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 21 running, 19 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:09:59.084: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 36 running, 4 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:09.085: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:19.085: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:29.119: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:39.119: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:49.119: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:10:59.119: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:09.119: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:19.120: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:29.128: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:39.128: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:49.129: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:11:59.129: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:09.129: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:19.129: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:29.129: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:39.130: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:49.130: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:12:59.130: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:09.130: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:19.130: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:29.131: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:39.131: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:49.131: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:13:59.131: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:09.132: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:19.132: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:29.132: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:39.132: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:49.132: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:14:59.133: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:15:09.173: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276 Pods: 50 out of 50 created, 40 running, 0 pending, 10 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ivj8q still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jep6b still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cehum still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-65dk9 still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-8y1pr still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3fkcl still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cslz9 still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-n5wfs still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jog0j still unassigned
Oct 8 16:15:09.173: INFO: Pod resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-v85hi still unassigned
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3fkcl <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-65dk9 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-8y1pr <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cehum <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cslz9 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ivj8q <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jep6b <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jog0j <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-n5wfs <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-v85hi <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2 127.0.0.1 <nil>
Oct 8 16:15:09.204: INFO: Pod e2e-tests-kubelet-perf-qyy5z resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k 127.0.0.1 <nil>
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-kubelet-perf-qyy5z".
Oct 8 16:15:09.283: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj: {kubelet 127.0.0.1} Created: Created with rkt id 0d795f7a
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj: {kubelet 127.0.0.1} Started: Started with rkt id 0d795f7a
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz: {kubelet 127.0.0.1} Created: Created with rkt id 80cba049
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz: {kubelet 127.0.0.1} Started: Started with rkt id 80cba049
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r: {kubelet 127.0.0.1} Created: Created with rkt id a4d80439
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r: {kubelet 127.0.0.1} Started: Started with rkt id a4d80439
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3fkcl: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45: {kubelet 127.0.0.1} Created: Created with rkt id 65c24c6b
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45: {kubelet 127.0.0.1} Started: Started with rkt id 65c24c6b
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5: {kubelet 127.0.0.1} Created: Created with rkt id bd9d38bc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5: {kubelet 127.0.0.1} Started: Started with rkt id bd9d38bc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-65dk9: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd: {kubelet 127.0.0.1} Created: Created with rkt id fe597e34
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd: {kubelet 127.0.0.1} Started: Started with rkt id fe597e34
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih: {kubelet 127.0.0.1} Created: Created with rkt id 2f0cc78a
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih: {kubelet 127.0.0.1} Started: Started with rkt id 2f0cc78a
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-8y1pr: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc: {kubelet 127.0.0.1} Created: Created with rkt id fe128b16
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc: {kubelet 127.0.0.1} Started: Started with rkt id fe128b16
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od: {kubelet 127.0.0.1} Created: Created with rkt id d761eaea
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od: {kubelet 127.0.0.1} Started: Started with rkt id d761eaea
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cehum: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cslz9: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b: {kubelet 127.0.0.1} Created: Created with rkt id 3e1dc1e0
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b: {kubelet 127.0.0.1} Started: Started with rkt id 3e1dc1e0
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz: {kubelet 127.0.0.1} Created: Created with rkt id 8260f43f
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz: {kubelet 127.0.0.1} Started: Started with rkt id 8260f43f
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb: {kubelet 127.0.0.1} Created: Created with rkt id 4a2d9099
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb: {kubelet 127.0.0.1} Started: Started with rkt id 4a2d9099
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq: {kubelet 127.0.0.1} Created: Created with rkt id 23c75637
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq: {kubelet 127.0.0.1} Started: Started with rkt id 23c75637
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8: {kubelet 127.0.0.1} Created: Created with rkt id c7b93722
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8: {kubelet 127.0.0.1} Started: Started with rkt id c7b93722
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0: {kubelet 127.0.0.1} Created: Created with rkt id 1fb681ce
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0: {kubelet 127.0.0.1} Started: Started with rkt id 1fb681ce
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce: {kubelet 127.0.0.1} Created: Created with rkt id 37e85298
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce: {kubelet 127.0.0.1} Started: Started with rkt id 37e85298
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu: {kubelet 127.0.0.1} Created: Created with rkt id b38309a7
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu: {kubelet 127.0.0.1} Started: Started with rkt id b38309a7
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ivj8q: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn: {kubelet 127.0.0.1} Created: Created with rkt id 46cc5492
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn: {kubelet 127.0.0.1} Started: Started with rkt id 46cc5492
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jep6b: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jog0j: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z: {kubelet 127.0.0.1} Created: Created with rkt id b1dd2e8b
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z: {kubelet 127.0.0.1} Started: Started with rkt id b1dd2e8b
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh: {kubelet 127.0.0.1} Created: Created with rkt id 8cc7b306
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh: {kubelet 127.0.0.1} Started: Started with rkt id 8cc7b306
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87: {kubelet 127.0.0.1} Created: Created with rkt id 0a60ff25
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87: {kubelet 127.0.0.1} Started: Started with rkt id 0a60ff25
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje: {kubelet 127.0.0.1} Created: Created with rkt id a63a7650
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje: {kubelet 127.0.0.1} Started: Started with rkt id a63a7650
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-n5wfs: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh: {kubelet 127.0.0.1} Created: Created with rkt id 6ab2e260
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh: {kubelet 127.0.0.1} Started: Started with rkt id 6ab2e260
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s: {kubelet 127.0.0.1} Created: Created with rkt id 3dd2b154
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s: {kubelet 127.0.0.1} Started: Started with rkt id 3dd2b154
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3: {kubelet 127.0.0.1} Created: Created with rkt id 872838c5
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3: {kubelet 127.0.0.1} Started: Started with rkt id 872838c5
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28: {kubelet 127.0.0.1} Created: Created with rkt id 2d7ccd52
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28: {kubelet 127.0.0.1} Started: Started with rkt id 2d7ccd52
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f: {kubelet 127.0.0.1} Created: Created with rkt id 78eef239
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f: {kubelet 127.0.0.1} Started: Started with rkt id 78eef239
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6: {kubelet 127.0.0.1} Created: Created with rkt id 3b05dcb3
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6: {kubelet 127.0.0.1} Started: Started with rkt id 3b05dcb3
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx: {kubelet 127.0.0.1} Created: Created with rkt id 442cb375
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx: {kubelet 127.0.0.1} Started: Started with rkt id 442cb375
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5 to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5: {kubelet 127.0.0.1} Created: Created with rkt id a80759d8
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5: {kubelet 127.0.0.1} Started: Started with rkt id a80759d8
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi: {kubelet 127.0.0.1} Created: Created with rkt id b70106fc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi: {kubelet 127.0.0.1} Started: Started with rkt id b70106fc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r: {kubelet 127.0.0.1} Created: Created with rkt id 2fe2fcb5
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r: {kubelet 127.0.0.1} Started: Started with rkt id 2fe2fcb5
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-v85hi: {scheduler } FailedScheduling: Failed for reason PodFitsResources and possibly others
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n: {kubelet 127.0.0.1} Created: Created with rkt id 6647d3cc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n: {kubelet 127.0.0.1} Started: Started with rkt id 6647d3cc
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf: {kubelet 127.0.0.1} Created: Created with rkt id a986c9dd
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf: {kubelet 127.0.0.1} Started: Started with rkt id a986c9dd
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc: {kubelet 127.0.0.1} Created: Created with rkt id 83ec4656
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc: {kubelet 127.0.0.1} Started: Started with rkt id 83ec4656
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek: {kubelet 127.0.0.1} Created: Created with rkt id b74068b8
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek: {kubelet 127.0.0.1} Started: Started with rkt id b74068b8
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw: {kubelet 127.0.0.1} Created: Created with rkt id cfa4091d
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw: {kubelet 127.0.0.1} Started: Started with rkt id cfa4091d
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car to 127.0.0.1
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.284: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car: {kubelet 127.0.0.1} Created: Created with rkt id 1118b563
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car: {kubelet 127.0.0.1} Started: Started with rkt id 1118b563
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2 to 127.0.0.1
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2: {kubelet 127.0.0.1} Created: Created with rkt id 042c756f
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2: {kubelet 127.0.0.1} Started: Started with rkt id 042c756f
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k: {scheduler } Scheduled: Successfully assigned resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k to 127.0.0.1
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/pause:go" already present on machine
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k: {kubelet 127.0.0.1} Created: Created with rkt id 6c04b151
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k: {kubelet 127.0.0.1} Started: Started with rkt id 6c04b151
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jep6b
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cslz9
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jog0j
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cehum
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-8y1pr
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-65dk9
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-n5wfs
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-v85hi
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ivj8q
Oct 8 16:15:09.285: INFO: event for resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3fkcl
Oct 8 16:15:09.306: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-0jtwj 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:05 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1k2yz 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:39 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-1w58r 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3fkcl Pending []
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-3ks45 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:23 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-4z4m5 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-65dk9 Pending []
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7judd 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:44 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-7toih 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:44 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-8y1pr Pending []
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-bjedc 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:38 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-c20od 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:10 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cehum Pending []
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-cslz9 Pending []
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-eun9b 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:56 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-f6wxz 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:20 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fhwhb 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:39 -0700 PDT }]
Oct 8 16:15:09.306: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-fskyq 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:49 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-h9ft8 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:37 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hlvc0 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:16 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-hzqce 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:42 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-icidu 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:46 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ivj8q Pending []
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-j79cn 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:35 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jep6b Pending []
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-jog0j Pending []
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-lpq6z 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:17 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-luosh 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mrq87 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:42 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-mzaje 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:38 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-n5wfs Pending []
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nneuh 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:43 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-nwm3s 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:37 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ny4q3 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:50 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-oex28 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ozz6f 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:43 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-proq6 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:54 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-ql1gx 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sqif5 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:45 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-sttvi 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:46 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-to07r 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:47 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-v85hi Pending []
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-w813n 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:21 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-we0qf 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:49 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-whutc 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:00 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-y6lek 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:42 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-yodtw 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:22 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-z7car 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:42 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zgrp2 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:08:56 -0700 PDT }]
Oct 8 16:15:09.307: INFO: resource50-6ad64d07-6e11-11e5-bcd2-28d244b00276-zs47k 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:09:41 -0700 PDT }]
Oct 8 16:15:09.307: INFO:
Oct 8 16:15:09.307: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:15:09.314: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:15:09.314: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kubelet-perf-qyy5z" for this suite.
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:119
• Failure [452.276 seconds]
Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:143
experimental resource usage tracking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:142
over 30m0s with 50 pods per node. [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:140
Expected error:
<*errors.errorString | 0xc208586220>: {
s: "Only 40 pods started out of 50",
}
Only 40 pods started out of 50
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:66
------------------------------
Kubectl client Simple pod
should support inline execution and attach
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:196
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:15:29.328: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-yqqf4
Oct 8 16:15:29.329: INFO: Get service account default in ns e2e-tests-kubectl-yqqf4 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:15:31.330: INFO: Service account default in ns e2e-tests-kubectl-yqqf4 with secrets found. (2.002179274s)
[BeforeEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
STEP: creating the pod
Oct 8 16:15:31.330: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-yqqf4'
[AfterEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:156
STEP: using delete to clean up resources
Oct 8 16:15:31.354: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-yqqf4'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-yqqf4
• Failure in Spec Setup (BeforeEach) [7.069 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
should support inline execution and attach [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:196
Oct 8 16:15:31.350: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-yqqf4] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
[] <nil> 0xc20866ec00 exit status 1 <nil> true [0xc20824c450 0xc20824c4c0 0xc20824c528] [0xc20824c450 0xc20824c4c0 0xc20824c528] [0xc20824c4b8 0xc20824c510] [0x6bd870 0x6bd870] 0xc208b907e0}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
------------------------------
S
------------------------------
ReplicationController
should serve a basic image on each replica with a private image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
[BeforeEach] ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:15:36.397: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-s0jun
Oct 8 16:15:36.398: INFO: Get service account default in ns e2e-tests-replication-controller-s0jun failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:15:38.401: INFO: Service account default in ns e2e-tests-replication-controller-s0jun with secrets found. (2.003726842s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:15:38.401: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-s0jun
Oct 8 16:15:38.403: INFO: Service account default in ns e2e-tests-replication-controller-s0jun with secrets found. (2.031681ms)
[It] should serve a basic image on each replica with a private image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
[AfterEach] ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:15:38.408: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:15:38.411: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:15:38.411: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-replication-controller-s0jun" for this suite.
S [SKIPPING] [7.026 seconds]
ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:46
should serve a basic image on each replica with a private image [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:45
Oct 8 16:15:38.403: Only supported for providers [gce gke] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Services
should check NodePort out-of-range
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:704
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:15:43.423: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6ss1p
Oct 8 16:15:43.424: INFO: Get service account default in ns e2e-tests-services-6ss1p failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:15:45.427: INFO: Service account default in ns e2e-tests-services-6ss1p with secrets found. (2.003437437s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:15:45.427: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-6ss1p
Oct 8 16:15:45.429: INFO: Service account default in ns e2e-tests-services-6ss1p with secrets found. (2.450862ms)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should check NodePort out-of-range
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:704
STEP: creating service nodeport-range-test with type NodePort in namespace e2e-tests-services-6ss1p
STEP: changing service nodeport-range-test to out-of-range NodePort 20855
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 20855
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:15:45.740: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:15:45.743: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:15:45.743: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-6ss1p" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.366 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should check NodePort out-of-range
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:704
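The spec above builds a NodePort Service, then tries both to update it to and to recreate it with nodePort 20855, which falls outside the apiserver's default node-port range of 30000-32767, and expects both calls to be rejected. A minimal sketch of reproducing the rejection by hand against this cluster; the service name, selector, and port numbers below are illustrative, not taken from the test:

kubectl --server=127.0.0.1:8080 create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nodeport-range-demo     # illustrative name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    nodePort: 20855             # outside 30000-32767, expect an error
EOF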
------------------------------
Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:15:50.790: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-1thee
Oct 8 16:15:50.791: INFO: Get service account default in ns e2e-tests-kubectl-1thee failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:15:52.793: INFO: Service account default in ns e2e-tests-kubectl-1thee with secrets found. (2.00260778s)
[It] should check if Kubernetes master services is included in cluster-info
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
STEP: validating cluster-info
Oct 8 16:15:52.793: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth cluster-info'
Oct 8 16:15:52.812: INFO: Kubernetes master is running at 127.0.0.1:8080
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-1thee
• [SLOW TEST:7.036 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl cluster-info
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:244
should check if Kubernetes master services is included in cluster-info
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
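The cluster-info spec simply runs the kubectl invocation shown above and checks that the output mentions the Kubernetes master. The same check by hand (the grep pattern here is a guess at what the test looks for, not its exact match string):

kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth cluster-info | grep "Kubernetes master"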
------------------------------
hostPath
should give a volume the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
[BeforeEach] hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:15:57.827: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-rizrz
Oct 8 16:15:57.828: INFO: Get service account default in ns e2e-tests-hostpath-rizrz failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:15:59.829: INFO: Service account default in ns e2e-tests-hostpath-rizrz with secrets found. (2.002913867s)
[It] should give a volume the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
STEP: Creating a pod to test hostPath mode
Oct 8 16:15:59.834: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Oct 8 16:15:59.837: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 16:15:59.837: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.808197ms elapsed)
Oct 8 16:16:01.839: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 16:16:01.839: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004845882s elapsed)
Oct 8 16:16:03.841: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 16:16:03.841: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007387871s elapsed)
Oct 8 16:16:05.843: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 16:16:05.843: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.009262912s elapsed)
Oct 8 16:16:07.845: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-rizrz' so far
Oct 8 16:16:07.845: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.011273841s elapsed)
Oct 8 16:16:09.847: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-rizrz' so far
Oct 8 16:16:09.847: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-rizrz' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.013504811s elapsed)
STEP: Saw pod success
Oct 8 16:16:11.851: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-host-path-test container test-container-1: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
mode of file "/test-volume": dtrwxrwxrwx
[AfterEach] hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-rizrz
• [SLOW TEST:19.197 seconds]
hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should give a volume the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
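In this spec a pod mounts a hostPath volume and a helper container prints the volume's mount type and mode; the fetched logs above show the unusual-looking mode string dtrwxrwxrwx for /test-volume. A rough hand-run equivalent, assuming a busybox image is pullable; the image, pod name, and host path below are stand-ins, not what the e2e test actually uses:

kubectl --server=127.0.0.1:8080 create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox                    # stand-in for the e2e mount-test image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp                      # illustrative host directory
EOF
kubectl --server=127.0.0.1:8080 logs hostpath-mode-demo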
------------------------------
Pods
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:516
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:16:17.024: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-3rviq
Oct 8 16:16:17.025: INFO: Get service account default in ns e2e-tests-pods-3rviq failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:16:19.027: INFO: Service account default in ns e2e-tests-pods-3rviq with secrets found. (2.003319014s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:16:19.027: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-3rviq
Oct 8 16:16:19.028: INFO: Service account default in ns e2e-tests-pods-3rviq with secrets found. (1.240673ms)
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:516
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-3rviq
Oct 8 16:16:19.035: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Oct 8 16:16:19.037: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (2.47043ms elapsed)
Oct 8 16:16:21.039: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (2.004378173s elapsed)
Oct 8 16:16:23.041: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (4.006617378s elapsed)
Oct 8 16:16:25.043: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (6.008420275s elapsed)
Oct 8 16:16:27.045: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (8.010527642s elapsed)
Oct 8 16:16:29.048: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-3rviq' status to be '!pending'(found phase: "Pending", readiness: false) (10.013327385s elapsed)
Oct 8 16:16:31.050: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-3rviq' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-3rviq
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:18:31.280: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:18:31.285: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:18:31.285: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-3rviq" for this suite.
• [SLOW TEST:139.333 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:516
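The pod in this spec keeps /tmp/health in place, so the kubelet's exec probe ("cat /tmp/health") keeps succeeding and the restart count stays at 0 for the whole observation window (the pod runs for roughly two minutes here before being deleted); the sibling spec later in the log removes the file and expects a restart. A minimal sketch of such a pod, with busybox as an assumed stand-in image and illustrative probe timings:

kubectl --server=127.0.0.1:8080 create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo            # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox                    # stand-in image
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15         # illustrative values
      timeoutSeconds: 1
EOF
# the RESTARTS column should stay at 0 while /tmp/health exists
kubectl --server=127.0.0.1:8080 get pod liveness-exec-demo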
------------------------------
EmptyDir volumes
should support (root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:18:36.394: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-qwqj2
Oct 8 16:18:36.396: INFO: Get service account default in ns e2e-tests-emptydir-qwqj2 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:18:38.398: INFO: Service account default in ns e2e-tests-emptydir-qwqj2 with secrets found. (2.004257147s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:18:38.398: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-qwqj2
Oct 8 16:18:38.399: INFO: Service account default in ns e2e-tests-emptydir-qwqj2 with secrets found. (1.220359ms)
[It] should support (root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 8 16:18:38.403: INFO: Waiting up to 5m0s for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:18:38.405: INFO: No Status.Info for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' yet
Oct 8 16:18:38.405: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.834234ms elapsed)
Oct 8 16:18:40.406: INFO: No Status.Info for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' yet
Oct 8 16:18:40.406: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003409025s elapsed)
Oct 8 16:18:42.408: INFO: No Status.Info for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' yet
Oct 8 16:18:42.408: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005100713s elapsed)
Oct 8 16:18:44.411: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-qwqj2' so far
Oct 8 16:18:44.411: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.00813126s elapsed)
Oct 8 16:18:46.413: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-qwqj2' so far
Oct 8 16:18:46.413: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.010156456s elapsed)
Oct 8 16:18:48.416: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-qwqj2' so far
Oct 8 16:18:48.416: INFO: Waiting for pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-qwqj2' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.012661014s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-e7e9d95e-6e12-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:18:50.526: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:18:50.530: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:18:50.530: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-qwqj2" for this suite.
• [SLOW TEST:19.215 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
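Here the pod writes a file into an emptyDir volume on the default medium, and the fetched logs confirm the 0666 permissions (-rw-rw-rw-). A rough hand-run equivalent; busybox is an assumed stand-in for the e2e mount-test image and the names are illustrative:

kubectl --server=127.0.0.1:8080 create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # stand-in image
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium
EOF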
------------------------------
Proxy version v1
should proxy to cadvisor
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
[BeforeEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:18:55.572: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-8nk9b
Oct 8 16:18:55.573: INFO: Get service account default in ns e2e-tests-proxy-8nk9b failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:18:57.575: INFO: Service account default in ns e2e-tests-proxy-8nk9b with secrets found. (2.003718089s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:18:57.575: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-8nk9b
Oct 8 16:18:57.577: INFO: Service account default in ns e2e-tests-proxy-8nk9b with secrets found. (1.695993ms)
[It] should proxy to cadvisor
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Oct 8 16:18:57.587: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 7.181874ms)
Oct 8 16:18:57.589: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.953445ms)
Oct 8 16:18:57.590: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.661136ms)
Oct 8 16:18:57.592: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.579611ms)
Oct 8 16:18:57.593: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.516801ms)
Oct 8 16:18:57.595: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.592744ms)
Oct 8 16:18:57.597: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 1.684714ms)
Oct 8 16:18:57.774: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 177.558242ms)
Oct 8 16:18:57.972: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 197.930003ms)
Oct 8 16:18:58.174: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.822057ms)
Oct 8 16:18:58.372: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 197.398469ms)
Oct 8 16:18:58.572: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.891712ms)
Oct 8 16:18:58.772: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 200.170628ms)
Oct 8 16:18:58.972: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.811366ms)
Oct 8 16:18:59.173: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.432898ms)
Oct 8 16:18:59.372: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 198.397815ms)
Oct 8 16:18:59.573: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.851715ms)
Oct 8 16:18:59.773: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.785823ms)
Oct 8 16:18:59.973: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 199.535839ms)
Oct 8 16:19:00.174: INFO: /api/v1/proxy/nodes/127.0.0.1:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 200.640504ms)
[AfterEach] version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:19:00.174: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:19:00.372: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:19:00.372: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-8nk9b" for this suite.
• [SLOW TEST:5.401 seconds]
Proxy
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy to cadvisor
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
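This spec repeatedly fetches the node's cAdvisor endpoint (port 4194) through the apiserver proxy and checks for 200 responses; the proxied path appears verbatim in the log above. The same endpoint can be hit by hand against the insecure apiserver port used in this run:

curl -s -o /dev/null -w "%{http_code}\n" \
  http://127.0.0.1:8080/api/v1/proxy/nodes/127.0.0.1:4194/containers/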
------------------------------
Pod Disks
should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:296
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:19:00.973: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-ub0n8
Oct 8 16:19:00.974: INFO: Get service account default in ns e2e-tests-pod-disks-ub0n8 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:19:02.976: INFO: Service account default in ns e2e-tests-pod-disks-ub0n8 with secrets found. (2.002564197s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:19:02.976: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pod-disks-ub0n8
Oct 8 16:19:02.977: INFO: Service account default in ns e2e-tests-pod-disks-ub0n8 with secrets found. (1.194609ms)
[BeforeEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:66
[AfterEach] Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:19:02.980: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:19:02.982: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:19:02.982: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pod-disks-ub0n8" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.018 seconds]
Pod Disks
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:297
should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:296
Oct 8 16:19:02.977: Requires at least 2 nodes (not -1)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
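The Pod Disks specs need cloud persistent disks and at least two schedulable nodes, so they cannot run on this single-node local cluster; the "-1" in the skip message presumably reflects the harness's node-count setting being left unset for the local provider rather than an actual count. The real node count is easy to confirm:

kubectl --server=127.0.0.1:8080 get nodes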
------------------------------
Kubectl client Kubectl version
should check is all data is printed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:536
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:19:07.991: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6mmhi
Oct 8 16:19:07.992: INFO: Get service account default in ns e2e-tests-kubectl-6mmhi failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:19:09.994: INFO: Service account default in ns e2e-tests-kubectl-6mmhi with secrets found. (2.002252451s)
[It] should check is all data is printed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:536
Oct 8 16:19:09.994: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth version'
Oct 8 16:19:10.012: INFO: Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.1.589+081615b38dd04f-dirty", GitCommit:"081615b38dd04f14aa448f47ae4d3a780f27f154", GitTreeState:"dirty"}
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-6mmhi
• [SLOW TEST:7.030 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl version
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:537
should check is all data is printed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:536
------------------------------
S
------------------------------
Pods
should be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:490
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:19:15.023: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4n8ky
Oct 8 16:19:15.024: INFO: Get service account default in ns e2e-tests-pods-4n8ky failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:19:17.026: INFO: Service account default in ns e2e-tests-pods-4n8ky with secrets found. (2.003424395s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:19:17.026: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4n8ky
Oct 8 16:19:17.027: INFO: Service account default in ns e2e-tests-pods-4n8ky with secrets found. (1.326022ms)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:490
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-4n8ky
Oct 8 16:19:17.031: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Oct 8 16:19:17.035: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (3.892691ms elapsed)
Oct 8 16:19:19.037: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (2.006115009s elapsed)
Oct 8 16:19:21.039: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (4.008354114s elapsed)
Oct 8 16:19:23.041: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (6.010243389s elapsed)
Oct 8 16:19:25.043: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (8.01248469s elapsed)
Oct 8 16:19:27.045: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-4n8ky' status to be '!pending'(found phase: "Pending", readiness: false) (10.014297688s elapsed)
Oct 8 16:19:29.047: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-4n8ky' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-4n8ky
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: Restart count of pod e2e-tests-pods-4n8ky/liveness-exec is now 1 (14.020136694s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:19:43.127: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:19:43.129: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:19:43.129: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-4n8ky" for this suite.
• [SLOW TEST:33.194 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should be restarted with a docker exec "cat /tmp/health" liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:490
------------------------------
Kubectl client Update Demo
should scale a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:19:48.216: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-jb236
Oct 8 16:19:48.220: INFO: Get service account default in ns e2e-tests-kubectl-jb236 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:19:50.222: INFO: Service account default in ns e2e-tests-kubectl-jb236 with secrets found. (2.005966171s)
[BeforeEach] Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:92
[It] should scale a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
STEP: creating a replication controller
Oct 8 16:19:50.223: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-jb236'
STEP: using delete to clean up resources
Oct 8 16:19:50.240: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-jb236'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-jb236
• Failure [7.050 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Update Demo
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:124
should scale a replication controller [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:113
Oct 8 16:19:50.239: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-jb236] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
[] <nil> 0xc208396840 exit status 1 <nil> true [0xc20804eb38 0xc20804ebc0 0xc20804ec08] [0xc20804eb38 0xc20804ebc0 0xc20804ec08] [0xc20804eb88 0xc20804ebf0] [0x6bd870 0x6bd870] 0xc20838ec60}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
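This is the only hard failure in this stretch, and it looks environmental rather than a rkt/stage1 problem: kubectl create is pointed at docs/user-guide/update-demo/nautilus-rc.yaml under /home/yifan/gopher/src/github.com/coreos/kubernetes, and that file does not exist there, so every Update Demo spec fails before it gets to scale anything. The harness presumably resolves these manifests relative to its configured repo root, so the fix is to run against a checkout that actually contains docs/user-guide/update-demo (or to copy those manifests in). A quick check by hand; the second path is a guess that the tree the test binaries were built from still ships the manifest:

# the path the test tried (taken from the failure message) is missing
ls -l /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml
# the tree the test binaries were built from may still have it
ls -l /home/yifan/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml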
------------------------------
Namespaces
should delete fast enough (90 percent of 100 namespaces in 150 seconds)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:115
[BeforeEach] Namespaces
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:107
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:115
STEP: Creating testing namespaces
Oct 8 16:19:55.276: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-0-yscm6
Oct 8 16:19:55.317: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-8-bttwl
Oct 8 16:19:55.317: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-9-dsj93
Oct 8 16:19:55.317: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-6-bl5gq
Oct 8 16:19:55.318: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-7-j42f0
Oct 8 16:19:55.318: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-1-v589j
Oct 8 16:19:55.318: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-2-j088x
Oct 8 16:19:55.319: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-3-2h2l8
Oct 8 16:19:55.319: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-4-l2e70
Oct 8 16:19:55.319: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-5-6ayiv
Oct 8 16:19:55.496: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-10-8t4pm
Oct 8 16:19:55.681: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-11-bw9pb
Oct 8 16:19:55.877: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-12-zvrod
Oct 8 16:19:56.097: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-13-g2shu
Oct 8 16:19:56.310: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-14-avkw5
Oct 8 16:19:56.488: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-15-nhcyo
Oct 8 16:19:56.714: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-16-q1taj
Oct 8 16:19:56.879: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-17-1nvi4
Oct 8 16:19:57.078: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-18-4c2ol
Oct 8 16:19:57.286: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-19-hxevo
Oct 8 16:19:57.489: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-20-5nfcg
Oct 8 16:19:57.682: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-21-hy3ft
Oct 8 16:19:57.881: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-22-tdpgi
Oct 8 16:19:58.082: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-23-jofn0
Oct 8 16:19:58.278: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-24-rgzt8
Oct 8 16:19:58.481: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-25-n8ay9
Oct 8 16:19:58.682: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-26-u08mk
Oct 8 16:19:58.881: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-27-t621e
Oct 8 16:19:59.081: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-28-gff96
Oct 8 16:19:59.280: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-29-gxttm
Oct 8 16:19:59.481: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-30-vit1e
Oct 8 16:19:59.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-31-y4lia
Oct 8 16:19:59.892: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-32-rrae9
Oct 8 16:20:00.067: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-33-he1rm
Oct 8 16:20:00.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-34-l51oc
Oct 8 16:20:00.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-35-shdfh
Oct 8 16:20:00.668: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-36-84g15
Oct 8 16:20:00.893: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-37-fcqgb
Oct 8 16:20:01.069: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-38-smxxz
Oct 8 16:20:01.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-39-ts00q
Oct 8 16:20:01.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-40-9f0yw
Oct 8 16:20:01.667: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-41-upw26
Oct 8 16:20:01.891: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-42-kx7tb
Oct 8 16:20:02.079: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-43-ef8ww
Oct 8 16:20:02.281: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-44-oxbnx
Oct 8 16:20:02.496: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-45-bt1r3
Oct 8 16:20:02.680: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-46-80z6a
Oct 8 16:20:02.887: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-47-lq1l5
Oct 8 16:20:03.078: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-48-ogvhx
Oct 8 16:20:03.281: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-49-8btd7
Oct 8 16:20:03.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-50-hpoy2
Oct 8 16:20:03.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-51-5j4y4
Oct 8 16:20:03.901: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-52-7puwv
Oct 8 16:20:04.082: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-53-6k9xv
Oct 8 16:20:04.279: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-54-r3ol9
Oct 8 16:20:04.493: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-55-7dd9h
Oct 8 16:20:04.687: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-56-p3xw9
Oct 8 16:20:04.888: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-57-pi6c5
Oct 8 16:20:05.090: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-58-704pq
Oct 8 16:20:05.289: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-59-qwutm
Oct 8 16:20:05.489: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-60-s1661
Oct 8 16:20:05.689: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-61-imw6l
Oct 8 16:20:05.908: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-62-i7wsn
Oct 8 16:20:06.090: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-63-9tq94
Oct 8 16:20:06.287: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-64-okyei
Oct 8 16:20:06.495: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-65-hbs9e
Oct 8 16:20:06.687: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-66-pn0dz
Oct 8 16:20:06.890: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-67-qblra
Oct 8 16:20:07.087: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-68-x6wxf
Oct 8 16:20:07.291: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-69-jzom1
Oct 8 16:20:07.485: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-70-xme00
Oct 8 16:20:07.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-71-wjmff
Oct 8 16:20:07.901: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-72-5coy2
Oct 8 16:20:08.066: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-73-fl5hr
Oct 8 16:20:08.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-74-ddvg8
Oct 8 16:20:08.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-75-y33me
Oct 8 16:20:08.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-76-5zz0f
Oct 8 16:20:08.901: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-77-chw3b
Oct 8 16:20:09.066: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-78-o23km
Oct 8 16:20:09.266: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-79-dwixq
Oct 8 16:20:09.484: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-80-5ulrh
Oct 8 16:20:09.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-81-6j1jb
Oct 8 16:20:09.888: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-82-lk4yp
Oct 8 16:20:10.091: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-83-szuj5
Oct 8 16:20:10.311: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-84-wuimq
Oct 8 16:20:10.511: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-85-g5rro
Oct 8 16:20:10.679: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-86-65nb5
Oct 8 16:20:10.936: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-87-z2v0o
Oct 8 16:20:11.101: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-88-qoyq5
Oct 8 16:20:11.281: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-89-r9fc7
Oct 8 16:20:11.492: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-90-t1adz
Oct 8 16:20:11.679: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-91-waqcf
Oct 8 16:20:11.881: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-92-um2ex
Oct 8 16:20:12.083: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-93-vfbem
Oct 8 16:20:12.283: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-94-i8954
Oct 8 16:20:12.466: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-95-m9j9u
Oct 8 16:20:12.666: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-96-g0kuh
Oct 8 16:20:12.896: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-97-bw1rf
Oct 8 16:20:13.081: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-98-a96tr
Oct 8 16:20:13.280: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nslifetest-99-nzfu5
Oct 8 16:20:13.465: INFO: Service account default in ns e2e-tests-nslifetest-0-yscm6 with secrets found. (18.188862958s)
Oct 8 16:20:13.665: INFO: Service account default in ns e2e-tests-nslifetest-8-bttwl with secrets found. (18.347954818s)
Oct 8 16:20:13.865: INFO: Service account default in ns e2e-tests-nslifetest-9-dsj93 with secrets found. (18.547387826s)
Oct 8 16:20:14.066: INFO: Service account default in ns e2e-tests-nslifetest-6-bl5gq with secrets found. (18.748720226s)
Oct 8 16:20:14.267: INFO: Service account default in ns e2e-tests-nslifetest-7-j42f0 with secrets found. (18.94975438s)
Oct 8 16:20:14.465: INFO: Service account default in ns e2e-tests-nslifetest-1-v589j with secrets found. (19.146713909s)
Oct 8 16:20:14.665: INFO: Service account default in ns e2e-tests-nslifetest-2-j088x with secrets found. (19.346180544s)
Oct 8 16:20:14.865: INFO: Service account default in ns e2e-tests-nslifetest-3-2h2l8 with secrets found. (19.545792766s)
Oct 8 16:20:15.065: INFO: Service account default in ns e2e-tests-nslifetest-4-l2e70 with secrets found. (19.745662119s)
Oct 8 16:20:15.265: INFO: Service account default in ns e2e-tests-nslifetest-5-6ayiv with secrets found. (19.9452232s)
Oct 8 16:20:15.465: INFO: Service account default in ns e2e-tests-nslifetest-10-8t4pm with secrets found. (19.968917248s)
Oct 8 16:20:15.665: INFO: Service account default in ns e2e-tests-nslifetest-11-bw9pb with secrets found. (19.983596012s)
Oct 8 16:20:15.865: INFO: Service account default in ns e2e-tests-nslifetest-12-zvrod with secrets found. (19.988035955s)
Oct 8 16:20:16.065: INFO: Service account default in ns e2e-tests-nslifetest-13-g2shu with secrets found. (19.967447449s)
Oct 8 16:20:16.265: INFO: Service account default in ns e2e-tests-nslifetest-14-avkw5 with secrets found. (19.954443304s)
Oct 8 16:20:16.466: INFO: Service account default in ns e2e-tests-nslifetest-15-nhcyo with secrets found. (19.977755777s)
Oct 8 16:20:16.665: INFO: Service account default in ns e2e-tests-nslifetest-16-q1taj with secrets found. (19.95065236s)
Oct 8 16:20:16.865: INFO: Service account default in ns e2e-tests-nslifetest-17-1nvi4 with secrets found. (19.985390948s)
Oct 8 16:20:17.065: INFO: Service account default in ns e2e-tests-nslifetest-18-4c2ol with secrets found. (19.986305733s)
Oct 8 16:20:17.265: INFO: Service account default in ns e2e-tests-nslifetest-19-hxevo with secrets found. (19.978729552s)
Oct 8 16:20:17.465: INFO: Service account default in ns e2e-tests-nslifetest-20-5nfcg with secrets found. (19.975407919s)
Oct 8 16:20:17.665: INFO: Service account default in ns e2e-tests-nslifetest-21-hy3ft with secrets found. (19.983086286s)
Oct 8 16:20:17.865: INFO: Service account default in ns e2e-tests-nslifetest-22-tdpgi with secrets found. (19.984051235s)
Oct 8 16:20:18.065: INFO: Service account default in ns e2e-tests-nslifetest-23-jofn0 with secrets found. (19.983045033s)
Oct 8 16:20:18.265: INFO: Service account default in ns e2e-tests-nslifetest-24-rgzt8 with secrets found. (19.986194115s)
Oct 8 16:20:18.465: INFO: Service account default in ns e2e-tests-nslifetest-25-n8ay9 with secrets found. (19.98355112s)
Oct 8 16:20:18.665: INFO: Service account default in ns e2e-tests-nslifetest-26-u08mk with secrets found. (19.982690556s)
Oct 8 16:20:18.865: INFO: Service account default in ns e2e-tests-nslifetest-27-t621e with secrets found. (19.984420875s)
Oct 8 16:20:19.065: INFO: Service account default in ns e2e-tests-nslifetest-28-gff96 with secrets found. (19.984022241s)
Oct 8 16:20:19.265: INFO: Service account default in ns e2e-tests-nslifetest-29-gxttm with secrets found. (19.985253907s)
Oct 8 16:20:19.465: INFO: Service account default in ns e2e-tests-nslifetest-30-vit1e with secrets found. (19.983491615s)
Oct 8 16:20:19.665: INFO: Service account default in ns e2e-tests-nslifetest-31-y4lia with secrets found. (19.998192397s)
Oct 8 16:20:19.865: INFO: Service account default in ns e2e-tests-nslifetest-32-rrae9 with secrets found. (19.972383647s)
Oct 8 16:20:20.065: INFO: Service account default in ns e2e-tests-nslifetest-33-he1rm with secrets found. (19.997545446s)
Oct 8 16:20:20.265: INFO: Service account default in ns e2e-tests-nslifetest-34-l51oc with secrets found. (19.999154964s)
Oct 8 16:20:20.466: INFO: Service account default in ns e2e-tests-nslifetest-35-shdfh with secrets found. (19.999917681s)
Oct 8 16:20:20.666: INFO: Service account default in ns e2e-tests-nslifetest-36-84g15 with secrets found. (19.998015602s)
Oct 8 16:20:20.865: INFO: Service account default in ns e2e-tests-nslifetest-37-fcqgb with secrets found. (19.971718564s)
Oct 8 16:20:21.065: INFO: Service account default in ns e2e-tests-nslifetest-38-smxxz with secrets found. (19.995512153s)
Oct 8 16:20:21.267: INFO: Service account default in ns e2e-tests-nslifetest-39-ts00q with secrets found. (20.000684576s)
Oct 8 16:20:21.465: INFO: Service account default in ns e2e-tests-nslifetest-40-9f0yw with secrets found. (19.998429465s)
Oct 8 16:20:21.665: INFO: Service account default in ns e2e-tests-nslifetest-41-upw26 with secrets found. (19.998120512s)
Oct 8 16:20:21.865: INFO: Service account default in ns e2e-tests-nslifetest-42-kx7tb with secrets found. (19.974004161s)
Oct 8 16:20:22.065: INFO: Service account default in ns e2e-tests-nslifetest-43-ef8ww with secrets found. (19.985777624s)
Oct 8 16:20:22.265: INFO: Service account default in ns e2e-tests-nslifetest-44-oxbnx with secrets found. (19.983312067s)
Oct 8 16:20:22.465: INFO: Service account default in ns e2e-tests-nslifetest-45-bt1r3 with secrets found. (19.968846686s)
Oct 8 16:20:22.665: INFO: Service account default in ns e2e-tests-nslifetest-46-80z6a with secrets found. (19.984324915s)
Oct 8 16:20:22.865: INFO: Service account default in ns e2e-tests-nslifetest-47-lq1l5 with secrets found. (19.977559864s)
Oct 8 16:20:23.065: INFO: Service account default in ns e2e-tests-nslifetest-48-ogvhx with secrets found. (19.986479982s)
Oct 8 16:20:23.265: INFO: Service account default in ns e2e-tests-nslifetest-49-8btd7 with secrets found. (19.98441016s)
Oct 8 16:20:23.466: INFO: Service account default in ns e2e-tests-nslifetest-50-hpoy2 with secrets found. (20.000016976s)
Oct 8 16:20:23.665: INFO: Service account default in ns e2e-tests-nslifetest-51-5j4y4 with secrets found. (19.998662991s)
Oct 8 16:20:23.865: INFO: Service account default in ns e2e-tests-nslifetest-52-7puwv with secrets found. (19.964548468s)
Oct 8 16:20:24.065: INFO: Service account default in ns e2e-tests-nslifetest-53-6k9xv with secrets found. (19.982953023s)
Oct 8 16:20:24.265: INFO: Service account default in ns e2e-tests-nslifetest-54-r3ol9 with secrets found. (19.985957741s)
Oct 8 16:20:24.465: INFO: Service account default in ns e2e-tests-nslifetest-55-7dd9h with secrets found. (19.971209222s)
Oct 8 16:20:24.665: INFO: Service account default in ns e2e-tests-nslifetest-56-p3xw9 with secrets found. (19.977358554s)
Oct 8 16:20:24.865: INFO: Service account default in ns e2e-tests-nslifetest-57-pi6c5 with secrets found. (19.976695796s)
Oct 8 16:20:25.065: INFO: Service account default in ns e2e-tests-nslifetest-58-704pq with secrets found. (19.974352182s)
Oct 8 16:20:25.265: INFO: Service account default in ns e2e-tests-nslifetest-59-qwutm with secrets found. (19.97535802s)
Oct 8 16:20:25.465: INFO: Service account default in ns e2e-tests-nslifetest-60-s1661 with secrets found. (19.975304958s)
Oct 8 16:20:25.665: INFO: Service account default in ns e2e-tests-nslifetest-61-imw6l with secrets found. (19.975569005s)
Oct 8 16:20:25.867: INFO: Service account default in ns e2e-tests-nslifetest-62-i7wsn with secrets found. (19.958396433s)
Oct 8 16:20:26.065: INFO: Service account default in ns e2e-tests-nslifetest-63-9tq94 with secrets found. (19.975063904s)
Oct 8 16:20:26.265: INFO: Service account default in ns e2e-tests-nslifetest-64-okyei with secrets found. (19.977680435s)
Oct 8 16:20:26.465: INFO: Service account default in ns e2e-tests-nslifetest-65-hbs9e with secrets found. (19.969258788s)
Oct 8 16:20:26.665: INFO: Service account default in ns e2e-tests-nslifetest-66-pn0dz with secrets found. (19.977472364s)
Oct 8 16:20:26.865: INFO: Service account default in ns e2e-tests-nslifetest-67-qblra with secrets found. (19.974295835s)
Oct 8 16:20:27.065: INFO: Service account default in ns e2e-tests-nslifetest-68-x6wxf with secrets found. (19.977964499s)
Oct 8 16:20:27.265: INFO: Service account default in ns e2e-tests-nslifetest-69-jzom1 with secrets found. (19.973564008s)
Oct 8 16:20:27.465: INFO: Service account default in ns e2e-tests-nslifetest-70-xme00 with secrets found. (19.979416416s)
Oct 8 16:20:27.665: INFO: Service account default in ns e2e-tests-nslifetest-71-wjmff with secrets found. (19.99851465s)
Oct 8 16:20:27.865: INFO: Service account default in ns e2e-tests-nslifetest-72-5coy2 with secrets found. (19.963181593s)
Oct 8 16:20:28.065: INFO: Service account default in ns e2e-tests-nslifetest-73-fl5hr with secrets found. (19.999427786s)
Oct 8 16:20:28.265: INFO: Service account default in ns e2e-tests-nslifetest-74-ddvg8 with secrets found. (19.999483238s)
Oct 8 16:20:28.465: INFO: Service account default in ns e2e-tests-nslifetest-75-y33me with secrets found. (19.999318633s)
Oct 8 16:20:28.666: INFO: Service account default in ns e2e-tests-nslifetest-76-5zz0f with secrets found. (19.999449573s)
Oct 8 16:20:28.865: INFO: Service account default in ns e2e-tests-nslifetest-77-chw3b with secrets found. (19.963543712s)
Oct 8 16:20:29.067: INFO: Service account default in ns e2e-tests-nslifetest-78-o23km with secrets found. (20.001387751s)
Oct 8 16:20:29.265: INFO: Service account default in ns e2e-tests-nslifetest-79-dwixq with secrets found. (19.999116081s)
Oct 8 16:20:29.465: INFO: Service account default in ns e2e-tests-nslifetest-80-5ulrh with secrets found. (19.980837861s)
Oct 8 16:20:29.665: INFO: Service account default in ns e2e-tests-nslifetest-81-6j1jb with secrets found. (19.998922337s)
Oct 8 16:20:29.865: INFO: Service account default in ns e2e-tests-nslifetest-82-lk4yp with secrets found. (19.976986954s)
Oct 8 16:20:30.065: INFO: Service account default in ns e2e-tests-nslifetest-83-szuj5 with secrets found. (19.974070566s)
Oct 8 16:20:30.265: INFO: Service account default in ns e2e-tests-nslifetest-84-wuimq with secrets found. (19.953420849s)
Oct 8 16:20:30.477: INFO: Service account default in ns e2e-tests-nslifetest-85-g5rro with secrets found. (19.965780555s)
Oct 8 16:20:30.665: INFO: Service account default in ns e2e-tests-nslifetest-86-65nb5 with secrets found. (19.98595289s)
Oct 8 16:20:30.865: INFO: Service account default in ns e2e-tests-nslifetest-87-z2v0o with secrets found. (19.928967255s)
Oct 8 16:20:31.066: INFO: Service account default in ns e2e-tests-nslifetest-88-qoyq5 with secrets found. (19.964310181s)
Oct 8 16:20:31.265: INFO: Service account default in ns e2e-tests-nslifetest-89-r9fc7 with secrets found. (19.983593586s)
Oct 8 16:20:31.466: INFO: Service account default in ns e2e-tests-nslifetest-90-t1adz with secrets found. (19.973355596s)
Oct 8 16:20:31.665: INFO: Service account default in ns e2e-tests-nslifetest-91-waqcf with secrets found. (19.985648594s)
Oct 8 16:20:31.865: INFO: Service account default in ns e2e-tests-nslifetest-92-um2ex with secrets found. (19.983151638s)
Oct 8 16:20:32.065: INFO: Service account default in ns e2e-tests-nslifetest-93-vfbem with secrets found. (19.981978778s)
Oct 8 16:20:32.265: INFO: Service account default in ns e2e-tests-nslifetest-94-i8954 with secrets found. (19.981405716s)
Oct 8 16:20:32.465: INFO: Service account default in ns e2e-tests-nslifetest-95-m9j9u with secrets found. (19.998990445s)
Oct 8 16:20:32.665: INFO: Service account default in ns e2e-tests-nslifetest-96-g0kuh with secrets found. (19.998720224s)
Oct 8 16:20:32.864: INFO: Service account default in ns e2e-tests-nslifetest-97-bw1rf with secrets found. (19.96843133s)
Oct 8 16:20:33.066: INFO: Service account default in ns e2e-tests-nslifetest-98-a96tr with secrets found. (19.984082306s)
Oct 8 16:20:33.266: INFO: Service account default in ns e2e-tests-nslifetest-99-nzfu5 with secrets found. (19.98573675s)
STEP: Waiting 10 seconds
STEP: Deleting namespaces
Oct 8 16:20:43.283: INFO: namespace : e2e-tests-nslifetest-1-v589j api call to delete is complete
Oct 8 16:20:43.318: INFO: namespace : e2e-tests-nslifetest-12-zvrod api call to delete is complete
Oct 8 16:20:43.319: INFO: namespace : e2e-tests-nslifetest-13-g2shu api call to delete is complete
Oct 8 16:20:43.319: INFO: namespace : e2e-tests-nslifetest-16-q1taj api call to delete is complete
Oct 8 16:20:43.319: INFO: namespace : e2e-tests-nslifetest-10-8t4pm api call to delete is complete
Oct 8 16:20:43.319: INFO: namespace : e2e-tests-nslifetest-0-yscm6 api call to delete is complete
Oct 8 16:20:43.319: INFO: namespace : e2e-tests-nslifetest-15-nhcyo api call to delete is complete
Oct 8 16:20:43.320: INFO: namespace : e2e-tests-nslifetest-14-avkw5 api call to delete is complete
Oct 8 16:20:43.320: INFO: namespace : e2e-tests-nslifetest-11-bw9pb api call to delete is complete
Oct 8 16:20:43.504: INFO: namespace : e2e-tests-nslifetest-17-1nvi4 api call to delete is complete
Oct 8 16:20:43.691: INFO: namespace : e2e-tests-nslifetest-18-4c2ol api call to delete is complete
Oct 8 16:20:43.882: INFO: namespace : e2e-tests-nslifetest-19-hxevo api call to delete is complete
Oct 8 16:20:44.081: INFO: namespace : e2e-tests-nslifetest-2-j088x api call to delete is complete
Oct 8 16:20:44.266: INFO: namespace : e2e-tests-nslifetest-20-5nfcg api call to delete is complete
Oct 8 16:20:44.482: INFO: namespace : e2e-tests-nslifetest-21-hy3ft api call to delete is complete
Oct 8 16:20:44.667: INFO: namespace : e2e-tests-nslifetest-22-tdpgi api call to delete is complete
Oct 8 16:20:44.892: INFO: namespace : e2e-tests-nslifetest-23-jofn0 api call to delete is complete
Oct 8 16:20:45.067: INFO: namespace : e2e-tests-nslifetest-24-rgzt8 api call to delete is complete
Oct 8 16:20:45.268: INFO: namespace : e2e-tests-nslifetest-25-n8ay9 api call to delete is complete
Oct 8 16:20:45.467: INFO: namespace : e2e-tests-nslifetest-26-u08mk api call to delete is complete
Oct 8 16:20:45.666: INFO: namespace : e2e-tests-nslifetest-27-t621e api call to delete is complete
Oct 8 16:20:45.894: INFO: namespace : e2e-tests-nslifetest-28-gff96 api call to delete is complete
Oct 8 16:20:46.067: INFO: namespace : e2e-tests-nslifetest-29-gxttm api call to delete is complete
Oct 8 16:20:46.282: INFO: namespace : e2e-tests-nslifetest-3-2h2l8 api call to delete is complete
Oct 8 16:20:46.466: INFO: namespace : e2e-tests-nslifetest-30-vit1e api call to delete is complete
Oct 8 16:20:46.666: INFO: namespace : e2e-tests-nslifetest-31-y4lia api call to delete is complete
Oct 8 16:20:46.892: INFO: namespace : e2e-tests-nslifetest-32-rrae9 api call to delete is complete
Oct 8 16:20:47.067: INFO: namespace : e2e-tests-nslifetest-33-he1rm api call to delete is complete
Oct 8 16:20:47.266: INFO: namespace : e2e-tests-nslifetest-34-l51oc api call to delete is complete
Oct 8 16:20:47.482: INFO: namespace : e2e-tests-nslifetest-35-shdfh api call to delete is complete
Oct 8 16:20:47.679: INFO: namespace : e2e-tests-nslifetest-36-84g15 api call to delete is complete
Oct 8 16:20:47.890: INFO: namespace : e2e-tests-nslifetest-37-fcqgb api call to delete is complete
Oct 8 16:20:48.069: INFO: namespace : e2e-tests-nslifetest-38-smxxz api call to delete is complete
Oct 8 16:20:48.266: INFO: namespace : e2e-tests-nslifetest-39-ts00q api call to delete is complete
Oct 8 16:20:48.466: INFO: namespace : e2e-tests-nslifetest-4-l2e70 api call to delete is complete
Oct 8 16:20:48.666: INFO: namespace : e2e-tests-nslifetest-40-9f0yw api call to delete is complete
Oct 8 16:20:48.880: INFO: namespace : e2e-tests-nslifetest-41-upw26 api call to delete is complete
Oct 8 16:20:49.073: INFO: namespace : e2e-tests-nslifetest-42-kx7tb api call to delete is complete
Oct 8 16:20:49.266: INFO: namespace : e2e-tests-nslifetest-43-ef8ww api call to delete is complete
Oct 8 16:20:49.466: INFO: namespace : e2e-tests-nslifetest-44-oxbnx api call to delete is complete
Oct 8 16:20:49.666: INFO: namespace : e2e-tests-nslifetest-45-bt1r3 api call to delete is complete
Oct 8 16:20:49.909: INFO: namespace : e2e-tests-nslifetest-46-80z6a api call to delete is complete
Oct 8 16:20:50.066: INFO: namespace : e2e-tests-nslifetest-47-lq1l5 api call to delete is complete
Oct 8 16:20:50.279: INFO: namespace : e2e-tests-nslifetest-48-ogvhx api call to delete is complete
Oct 8 16:20:50.467: INFO: namespace : e2e-tests-nslifetest-49-8btd7 api call to delete is complete
Oct 8 16:20:50.668: INFO: namespace : e2e-tests-nslifetest-5-6ayiv api call to delete is complete
Oct 8 16:20:50.890: INFO: namespace : e2e-tests-nslifetest-50-hpoy2 api call to delete is complete
Oct 8 16:20:51.081: INFO: namespace : e2e-tests-nslifetest-51-5j4y4 api call to delete is complete
Oct 8 16:20:51.266: INFO: namespace : e2e-tests-nslifetest-52-7puwv api call to delete is complete
Oct 8 16:20:51.484: INFO: namespace : e2e-tests-nslifetest-53-6k9xv api call to delete is complete
Oct 8 16:20:51.666: INFO: namespace : e2e-tests-nslifetest-54-r3ol9 api call to delete is complete
Oct 8 16:20:51.900: INFO: namespace : e2e-tests-nslifetest-55-7dd9h api call to delete is complete
Oct 8 16:20:52.066: INFO: namespace : e2e-tests-nslifetest-56-p3xw9 api call to delete is complete
Oct 8 16:20:52.281: INFO: namespace : e2e-tests-nslifetest-57-pi6c5 api call to delete is complete
Oct 8 16:20:52.481: INFO: namespace : e2e-tests-nslifetest-58-704pq api call to delete is complete
Oct 8 16:20:52.666: INFO: namespace : e2e-tests-nslifetest-59-qwutm api call to delete is complete
Oct 8 16:20:52.890: INFO: namespace : e2e-tests-nslifetest-6-bl5gq api call to delete is complete
Oct 8 16:20:53.081: INFO: namespace : e2e-tests-nslifetest-60-s1661 api call to delete is complete
Oct 8 16:20:53.266: INFO: namespace : e2e-tests-nslifetest-61-imw6l api call to delete is complete
Oct 8 16:20:53.467: INFO: namespace : e2e-tests-nslifetest-62-i7wsn api call to delete is complete
Oct 8 16:20:53.667: INFO: namespace : e2e-tests-nslifetest-63-9tq94 api call to delete is complete
Oct 8 16:20:53.890: INFO: namespace : e2e-tests-nslifetest-64-okyei api call to delete is complete
Oct 8 16:20:54.066: INFO: namespace : e2e-tests-nslifetest-65-hbs9e api call to delete is complete
Oct 8 16:20:54.281: INFO: namespace : e2e-tests-nslifetest-66-pn0dz api call to delete is complete
Oct 8 16:20:54.466: INFO: namespace : e2e-tests-nslifetest-67-qblra api call to delete is complete
Oct 8 16:20:54.666: INFO: namespace : e2e-tests-nslifetest-68-x6wxf api call to delete is complete
Oct 8 16:20:54.895: INFO: namespace : e2e-tests-nslifetest-69-jzom1 api call to delete is complete
Oct 8 16:20:55.086: INFO: namespace : e2e-tests-nslifetest-7-j42f0 api call to delete is complete
Oct 8 16:20:55.266: INFO: namespace : e2e-tests-nslifetest-70-xme00 api call to delete is complete
Oct 8 16:20:55.466: INFO: namespace : e2e-tests-nslifetest-71-wjmff api call to delete is complete
Oct 8 16:20:55.688: INFO: namespace : e2e-tests-nslifetest-72-5coy2 api call to delete is complete
Oct 8 16:20:56.296: INFO: namespace : e2e-tests-nslifetest-75-y33me api call to delete is complete
Oct 8 16:20:56.298: INFO: namespace : e2e-tests-nslifetest-73-fl5hr api call to delete is complete
Oct 8 16:20:56.298: INFO: namespace : e2e-tests-nslifetest-74-ddvg8 api call to delete is complete
Oct 8 16:20:56.476: INFO: namespace : e2e-tests-nslifetest-76-5zz0f api call to delete is complete
Oct 8 16:20:56.666: INFO: namespace : e2e-tests-nslifetest-77-chw3b api call to delete is complete
Oct 8 16:20:56.925: INFO: namespace : e2e-tests-nslifetest-78-o23km api call to delete is complete
Oct 8 16:20:57.067: INFO: namespace : e2e-tests-nslifetest-79-dwixq api call to delete is complete
Oct 8 16:20:57.291: INFO: namespace : e2e-tests-nslifetest-8-bttwl api call to delete is complete
Oct 8 16:20:57.509: INFO: namespace : e2e-tests-nslifetest-80-5ulrh api call to delete is complete
Oct 8 16:20:57.667: INFO: namespace : e2e-tests-nslifetest-81-6j1jb api call to delete is complete
Oct 8 16:20:57.901: INFO: namespace : e2e-tests-nslifetest-82-lk4yp api call to delete is complete
Oct 8 16:20:58.068: INFO: namespace : e2e-tests-nslifetest-83-szuj5 api call to delete is complete
Oct 8 16:20:58.266: INFO: namespace : e2e-tests-nslifetest-84-wuimq api call to delete is complete
Oct 8 16:20:58.466: INFO: namespace : e2e-tests-nslifetest-85-g5rro api call to delete is complete
Oct 8 16:20:58.689: INFO: namespace : e2e-tests-nslifetest-86-65nb5 api call to delete is complete
Oct 8 16:20:58.900: INFO: namespace : e2e-tests-nslifetest-87-z2v0o api call to delete is complete
Oct 8 16:20:59.066: INFO: namespace : e2e-tests-nslifetest-88-qoyq5 api call to delete is complete
Oct 8 16:20:59.287: INFO: namespace : e2e-tests-nslifetest-89-r9fc7 api call to delete is complete
Oct 8 16:20:59.466: INFO: namespace : e2e-tests-nslifetest-9-dsj93 api call to delete is complete
Oct 8 16:20:59.667: INFO: namespace : e2e-tests-nslifetest-90-t1adz api call to delete is complete
Oct 8 16:20:59.898: INFO: namespace : e2e-tests-nslifetest-91-waqcf api call to delete is complete
Oct 8 16:21:00.067: INFO: namespace : e2e-tests-nslifetest-92-um2ex api call to delete is complete
Oct 8 16:21:00.289: INFO: namespace : e2e-tests-nslifetest-93-vfbem api call to delete is complete
Oct 8 16:21:00.467: INFO: namespace : e2e-tests-nslifetest-94-i8954 api call to delete is complete
Oct 8 16:21:00.687: INFO: namespace : e2e-tests-nslifetest-95-m9j9u api call to delete is complete
Oct 8 16:21:00.894: INFO: namespace : e2e-tests-nslifetest-96-g0kuh api call to delete is complete
Oct 8 16:21:01.067: INFO: namespace : e2e-tests-nslifetest-97-bw1rf api call to delete is complete
Oct 8 16:21:01.266: INFO: namespace : e2e-tests-nslifetest-98-a96tr api call to delete is complete
Oct 8 16:21:01.494: INFO: namespace : e2e-tests-nslifetest-99-nzfu5 api call to delete is complete
STEP: Waiting for namespaces to vanish
Oct 8 16:21:03.502: INFO: Remaining namespaces : 69
Oct 8 16:21:05.503: INFO: Remaining namespaces : 67
Oct 8 16:21:07.502: INFO: Remaining namespaces : 64
Oct 8 16:21:09.513: INFO: Remaining namespaces : 61
Oct 8 16:21:11.513: INFO: Remaining namespaces : 58
Oct 8 16:21:13.501: INFO: Remaining namespaces : 55
Oct 8 16:21:15.501: INFO: Remaining namespaces : 52
Oct 8 16:21:17.500: INFO: Remaining namespaces : 49
Oct 8 16:21:19.500: INFO: Remaining namespaces : 47
Oct 8 16:21:21.505: INFO: Remaining namespaces : 44
Oct 8 16:21:23.501: INFO: Remaining namespaces : 41
Oct 8 16:21:25.499: INFO: Remaining namespaces : 38
Oct 8 16:21:27.499: INFO: Remaining namespaces : 35
Oct 8 16:21:29.499: INFO: Remaining namespaces : 32
Oct 8 16:21:31.499: INFO: Remaining namespaces : 29
Oct 8 16:21:33.501: INFO: Remaining namespaces : 26
Oct 8 16:21:35.499: INFO: Remaining namespaces : 24
Oct 8 16:21:37.497: INFO: Remaining namespaces : 21
Oct 8 16:21:39.497: INFO: Remaining namespaces : 18
Oct 8 16:21:41.497: INFO: Remaining namespaces : 15
Oct 8 16:21:43.498: INFO: Remaining namespaces : 12
[AfterEach] Namespaces
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:110
• [SLOW TEST:110.233 seconds]
Namespaces
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:120
should delete fast enough (90 percent of 100 namespaces in 150 seconds)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/namespace.go:115
------------------------------
Deployment
deployment should scale up and down in the right order
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:40
[BeforeEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:21:45.535: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-qrtsv
Oct 8 16:21:45.536: INFO: Get service account default in ns e2e-tests-deployment-qrtsv failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:21:47.538: INFO: Service account default in ns e2e-tests-deployment-qrtsv with secrets found. (2.002125115s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:21:47.538: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-deployment-qrtsv
Oct 8 16:21:47.539: INFO: Service account default in ns e2e-tests-deployment-qrtsv with secrets found. (1.148836ms)
[It] deployment should scale up and down in the right order
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:40
Oct 8 16:21:47.544: INFO: Pod name sample-pod: Found 0 pods out of 1
Oct 8 16:21:52.546: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 8 16:21:52.546: INFO: Waiting up to 5m0s for pod nginx-controller-vnk0w status to be running
Oct 8 16:21:52.548: INFO: Waiting for pod nginx-controller-vnk0w in namespace 'e2e-tests-deployment-qrtsv' status to be 'running'(found phase: "Pending", readiness: false) (2.052172ms elapsed)
Oct 8 16:21:54.550: INFO: Waiting for pod nginx-controller-vnk0w in namespace 'e2e-tests-deployment-qrtsv' status to be 'running'(found phase: "Pending", readiness: false) (2.003934747s elapsed)
Oct 8 16:21:56.552: INFO: Waiting for pod nginx-controller-vnk0w in namespace 'e2e-tests-deployment-qrtsv' status to be 'running'(found phase: "Pending", readiness: false) (4.005813618s elapsed)
Oct 8 16:21:58.554: INFO: Found pod 'nginx-controller-vnk0w' on node '127.0.0.1'
STEP: trying to dial each unique pod
Oct 8 16:21:58.558: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:00.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:02.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:04.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:06.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:08.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:10.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:12.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:14.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:16.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:18.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:20.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:22.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:24.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:26.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:28.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:30.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:32.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:34.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:36.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:38.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:40.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:42.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:44.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:46.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:48.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:50.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:52.566: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:54.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:56.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:22:58.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:00.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:02.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:04.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:06.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:08.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:10.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:12.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:14.564: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:16.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:18.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:20.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:22.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:24.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:26.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:28.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:30.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:32.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:34.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:36.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:38.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:40.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:42.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:44.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:46.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:48.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:50.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:52.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:54.563: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:56.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:58.562: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:58.566: INFO: Controller sample-pod: Failed to GET from replica 1 [nginx-controller-vnk0w]: the server does not allow access to the requested resource (get pods nginx-controller-vnk0w):
Oct 8 16:23:58.566: INFO: error in waiting for pods to come up: failed to wait for pods responding: timed out waiting for the condition
Oct 8 16:23:58.567: INFO: deleting replication controller nginx-controller
[AfterEach] Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-deployment-qrtsv".
Oct 8 16:23:58.573: INFO: event for nginx-controller-vnk0w: {scheduler } Scheduled: Successfully assigned nginx-controller-vnk0w to 127.0.0.1
Oct 8 16:23:58.573: INFO: event for nginx-controller-vnk0w: {kubelet 127.0.0.1} Pulled: Container image "nginx" already present on machine
Oct 8 16:23:58.573: INFO: event for nginx-controller-vnk0w: {kubelet 127.0.0.1} Created: Created with rkt id bce09c9e
Oct 8 16:23:58.573: INFO: event for nginx-controller-vnk0w: {kubelet 127.0.0.1} Started: Started with rkt id bce09c9e
Oct 8 16:23:58.573: INFO: event for nginx-controller: {replication-controller } SuccessfulCreate: Created pod: nginx-controller-vnk0w
Oct 8 16:23:58.575: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:23:58.575: INFO: nginx-controller-vnk0w 127.0.0.1 Running [{Ready True 0001-01-01 00:00:00 +0000 UTC 2015-10-08 16:21:57 -0700 PDT }]
Oct 8 16:23:58.575: INFO:
Oct 8 16:23:58.575: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:23:58.576: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:23:58.576: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-deployment-qrtsv" for this suite.
• Failure [138.114 seconds]
Deployment
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:41
deployment should scale up and down in the right order [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:40
Expected error:
<*errors.errorString | 0xc2086691e0>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:217
------------------------------
Mesos
applies slave attributes as labels
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/mesos.go:52
[BeforeEach] Mesos
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:24:03.613: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ran8n
Oct 8 16:24:03.614: INFO: Get service account default in ns e2e-tests-pods-ran8n failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:24:05.616: INFO: Service account default in ns e2e-tests-pods-ran8n with secrets found. (2.002858954s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:24:05.616: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ran8n
Oct 8 16:24:05.617: INFO: Service account default in ns e2e-tests-pods-ran8n with secrets found. (1.390578ms)
[BeforeEach] Mesos
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/mesos.go:33
[AfterEach] Mesos
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:24:05.622: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:24:05.626: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:24:05.627: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-ran8n" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [7.024 seconds]
Mesos
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/mesos.go:53
applies slave attributes as labels [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/mesos.go:52
Oct 8 16:24:05.617: Only supported for providers [mesos/docker] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Downward API volume
should provide labels and annotations files
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
[BeforeEach] Downward API volume
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:24:10.637: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-w9vw3
Oct 8 16:24:10.638: INFO: Get service account default in ns e2e-tests-downward-api-w9vw3 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:24:12.639: INFO: Service account default in ns e2e-tests-downward-api-w9vw3 with secrets found. (2.002464382s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:24:12.639: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-w9vw3
Oct 8 16:24:12.640: INFO: Service account default in ns e2e-tests-downward-api-w9vw3 with secrets found. (940.107µs)
[It] should provide labels and annotations files
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
STEP: Creating a pod to test downward API volume plugin
Oct 8 16:24:12.643: INFO: Waiting up to 5m0s for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:24:12.645: INFO: No Status.Info for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:24:12.645: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.83883ms elapsed)
Oct 8 16:24:14.647: INFO: No Status.Info for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:24:14.647: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004049332s elapsed)
Oct 8 16:24:16.649: INFO: No Status.Info for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:24:16.649: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005941768s elapsed)
Oct 8 16:24:18.651: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-w9vw3' so far
Oct 8 16:24:18.651: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007909313s elapsed)
Oct 8 16:24:20.653: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-w9vw3' so far
Oct 8 16:24:20.653: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.00980527s elapsed)
Oct 8 16:24:22.655: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-downward-api-w9vw3' so far
Oct 8 16:24:22.655: INFO: Waiting for pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-downward-api-w9vw3' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011722029s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276 container client-container: <nil>
STEP: Successfully fetched pod logs:cluster="rack10"
builder="john-doe"
kubernetes.io/config.seen="2015-10-08T16:24:12.682111696-07:00"
kubernetes.io/config.source="api"metadata-volume-af22ff08-6e13-11e5-bcd2-28d244b00276
[AfterEach] Downward API volume
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:24:24.716: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:24:24.718: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:24:24.718: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-w9vw3" for this suite.
• [SLOW TEST:19.120 seconds]
Downward API volume
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
should provide labels and annotations files
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
------------------------------
SSH
should SSH to all nodes and run commands
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/ssh.go:97
[BeforeEach] SSH
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/ssh.go:39
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
SSH
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/ssh.go:98
should SSH to all nodes and run commands [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/ssh.go:97
Oct 8 16:24:29.754: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Services
should serve identically named services in different namespaces on different load-balancers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:860
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:24:29.759: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ou1tr
Oct 8 16:24:29.761: INFO: Get service account default in ns e2e-tests-services-ou1tr failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:24:31.762: INFO: Service account default in ns e2e-tests-services-ou1tr with secrets found. (2.003002999s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:24:31.762: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-ou1tr
Oct 8 16:24:31.763: INFO: Service account default in ns e2e-tests-services-ou1tr with secrets found. (942.542µs)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should serve identically named services in different namespaces on different load-balancers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:860
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:24:31.769: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:24:31.771: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:24:31.771: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-ou1tr" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
S [SKIPPING] [7.044 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should serve identically named services in different namespaces on different load-balancers [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:860
Oct 8 16:24:31.763: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
Monitoring
should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:48
[BeforeEach] Monitoring
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Monitoring
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:49
should verify monitoring pods and all cluster nodes are available on influxdb using heapster. [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:48
Oct 8 16:24:36.801: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
DaemonRestart
Scheduler should continue assigning pods to nodes across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:303
[BeforeEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:249
STEP: Skipping test, which is not implemented for local
[It] Scheduler should continue assigning pods to nodes across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:303
Oct 8 16:24:36.805: INFO: WARNING: SSH through the restart config might not work on local
Oct 8 16:24:36.805: INFO: Checking if Daemon kube-scheduler on node is up by polling for a 200 on its /healthz endpoint
[AfterEach] DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:255
• Failure [5.005 seconds]
DaemonRestart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:327
Scheduler should continue assigning pods to nodes across restart [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:303
Expected error:
<*errors.errorString | 0xc208c0c600>: {
s: "error getting signer for provider local: 'getSigner(...) not implemented for local'",
}
error getting signer for provider local: 'getSigner(...) not implemented for local'
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63
------------------------------
kubelet Clean up pods on node
kubelet should be able to delete 10 pods per node in 1m0s.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159
[BeforeEach] kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:24:41.812: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-twegn
Oct 8 16:24:41.813: INFO: Get service account default in ns e2e-tests-kubelet-twegn failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:24:43.814: INFO: Service account default in ns e2e-tests-kubelet-twegn with secrets found. (2.002174985s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:24:43.814: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-twegn
Oct 8 16:24:43.815: INFO: Service account default in ns e2e-tests-kubelet-twegn with secrets found. (1.020563ms)
[BeforeEach] kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:107
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159
STEP: Creating a RC of 10 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-twegn
Oct 8 16:24:43.824: INFO: Created replication controller with name: cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276, namespace: e2e-tests-kubelet-twegn, replica count: 10
Oct 8 16:24:53.825: INFO: cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 Pods: 10 out of 10 created, 0 running, 10 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:25:03.825: INFO: cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 Pods: 10 out of 10 created, 0 running, 10 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:25:13.834: INFO: cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 Pods: 10 out of 10 created, 5 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:25:23.835: INFO: cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 Pods: 10 out of 10 created, 10 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 16:25:24.835: INFO: Checking pods on node 127.0.0.1 via /runningpods endpoint
Oct 8 16:25:24.856: INFO:
Resource usage on node "127.0.0.1":
container    cpu(cores)    memory(MB)
"/"          0.793         6029.64
STEP: Deleting the RC
STEP: deleting replication controller cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-twegn
Oct 8 16:25:26.899: INFO: Deleting RC cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 took: 2.041362029s
Oct 8 16:25:40.908: INFO: Terminating RC cleanup10-c1b843f4-6e13-11e5-bcd2-28d244b00276 pods took: 14.008983298s
Oct 8 16:25:41.908: INFO: Checking pods on node 127.0.0.1 via /runningpods endpoint
Oct 8 16:25:41.911: INFO: Deleting 10 pods on 1 nodes completed in 1.002846232s after the RC was deleted
Oct 8 16:25:41.911: INFO:
CPU usage of containers on node "127.0.0.1":
container    5th%    20th%   50th%   70th%   90th%   95th%   99th%
"/"          0.541   0.692   1.036   1.725   3.988   4.004   4.017
[AfterEach] kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:25:41.911: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:25:41.912: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:25:41.912: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kubelet-twegn" for this suite.
[AfterEach] kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:111
• [SLOW TEST:65.108 seconds]
kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:162
Clean up pods on node
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:161
kubelet should be able to delete 10 pods per node in 1m0s.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:159
------------------------------
EmptyDir volumes
should support (root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:25:46.920: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-5y0cy
Oct 8 16:25:46.921: INFO: Get service account default in ns e2e-tests-emptydir-5y0cy failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:25:48.922: INFO: Service account default in ns e2e-tests-emptydir-5y0cy with secrets found. (2.002309343s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:25:48.922: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-5y0cy
Oct 8 16:25:48.923: INFO: Service account default in ns e2e-tests-emptydir-5y0cy with secrets found. (908.785µs)
[It] should support (root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 8 16:25:48.926: INFO: Waiting up to 5m0s for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:25:48.928: INFO: No Status.Info for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:25:48.928: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.042173ms elapsed)
Oct 8 16:25:50.929: INFO: No Status.Info for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:25:50.929: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003872185s elapsed)
Oct 8 16:25:52.931: INFO: No Status.Info for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' yet
Oct 8 16:25:52.931: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005404593s elapsed)
Oct 8 16:25:54.933: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-5y0cy' so far
Oct 8 16:25:54.933: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007186557s elapsed)
Oct 8 16:25:56.935: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-5y0cy' so far
Oct 8 16:25:56.935: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009206113s elapsed)
Oct 8 16:25:58.936: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e886956d-6e13-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-5y0cy' so far
Oct 8 16:25:58.936: INFO: Waiting for pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-5y0cy' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.010922505s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-e886956d-6e13-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-emptydir-5y0cy".
Oct 8 16:26:01.029: INFO: event for pod-e886956d-6e13-11e5-bcd2-28d244b00276: {scheduler } Scheduled: Successfully assigned pod-e886956d-6e13-11e5-bcd2-28d244b00276 to 127.0.0.1
Oct 8 16:26:01.029: INFO: event for pod-e886956d-6e13-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.4" already present on machine
Oct 8 16:26:01.029: INFO: event for pod-e886956d-6e13-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Created: Created with rkt id a4f4da21
Oct 8 16:26:01.029: INFO: event for pod-e886956d-6e13-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Started: Started with rkt id a4f4da21
Oct 8 16:26:01.031: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:26:01.031: INFO:
Oct 8 16:26:01.031: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:26:01.034: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:26:01.034: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-5y0cy" for this suite.
• Failure [19.162 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,tmpfs) [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
"perms of file \"/test-volume/test-file\": -rw-rw-rw-" in container output
Expected
<string>:
to contain substring
<string>: perms of file "/test-volume/test-file": -rw-rw-rw-
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
------------------------------
Pods
should *not* be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:603
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:26:06.082: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wa7bj
Oct 8 16:26:06.083: INFO: Get service account default in ns e2e-tests-pods-wa7bj failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:26:08.087: INFO: Service account default in ns e2e-tests-pods-wa7bj with secrets found. (2.004474567s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:26:08.087: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-wa7bj
Oct 8 16:26:08.090: INFO: Service account default in ns e2e-tests-pods-wa7bj with secrets found. (3.472877ms)
[It] should *not* be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:603
STEP: Creating pod liveness-http in namespace e2e-tests-pods-wa7bj
Oct 8 16:26:08.101: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Oct 8 16:26:08.108: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (6.240527ms elapsed)
Oct 8 16:26:10.110: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (2.008298189s elapsed)
Oct 8 16:26:12.111: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (4.009870319s elapsed)
Oct 8 16:26:14.114: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (6.012278984s elapsed)
Oct 8 16:26:16.116: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (8.0142793s elapsed)
Oct 8 16:26:18.118: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-wa7bj' status to be '!pending'(found phase: "Pending", readiness: false) (10.016239573s elapsed)
Oct 8 16:26:20.120: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-wa7bj' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-wa7bj
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:28:20.296: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:28:20.298: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:28:20.298: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-wa7bj" for this suite.
• [SLOW TEST:139.299 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should *not* be restarted with a /healthz http liveness probe
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:603
------------------------------
Etcd failure
should recover from network partition with master
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:71
[BeforeEach] Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:55
[AfterEach] Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:63
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Etcd failure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:80
should recover from network partition with master [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:71
Oct 8 16:28:25.380: Only supported for providers [gce] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
S
------------------------------
Kubectl client Proxy server
should support --unix-socket=/path
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:680
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:28:25.390: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-4o0hc
Oct 8 16:28:25.392: INFO: Get service account default in ns e2e-tests-kubectl-4o0hc failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:28:27.393: INFO: Service account default in ns e2e-tests-kubectl-4o0hc with secrets found. (2.002876464s)
[It] should support --unix-socket=/path
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:680
STEP: Starting the proxy
Oct 8 16:28:27.393: INFO: Asynchronously running '/home/yifan/google-cloud-sdk/bin/kubectl kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth proxy --unix-socket=/tmp/kubectl-proxy-unix532945971/test'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-4o0hc
• Failure [7.034 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Proxy server
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:681
should support --unix-socket=/path [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:680
Oct 8 16:28:27.403: Expected output from kubectl proxy: EOF
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:673
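Note on the EOF above: the suite is driving the gcloud-bundled client at /home/yifan/google-cloud-sdk/bin/kubectl, and a plausible (unconfirmed) reading is that this binary predates the --unix-socket proxy flag, so the proxy exits before printing its startup line and the test reads EOF. The later 'Kubectl run pod' spec fails on an unknown --restart flag, which points at the same stale client. A manual check against a scratch socket would confirm it, assuming a curl new enough (7.40+) to speak unix sockets; the socket path below is only an example:

  $ /home/yifan/google-cloud-sdk/bin/kubectl proxy --unix-socket=/tmp/kubectl-proxy-test.sock &
  $ curl --unix-socket /tmp/kubectl-proxy-test.sock http://localhost/api

If the flag is unsupported, the first command fails immediately with an unknown-flag error, which matches the EOF seen here.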
------------------------------
EmptyDir volumes
should support (root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:28:32.419: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-62tua
Oct 8 16:28:32.420: INFO: Get service account default in ns e2e-tests-emptydir-62tua failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:28:34.422: INFO: Service account default in ns e2e-tests-emptydir-62tua with secrets found. (2.00264412s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:28:34.422: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-62tua
Oct 8 16:28:34.423: INFO: Service account default in ns e2e-tests-emptydir-62tua with secrets found. (1.241615ms)
[It] should support (root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 8 16:28:34.441: INFO: Waiting up to 5m0s for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 16:28:34.444: INFO: No Status.Info for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' yet
Oct 8 16:28:34.444: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.995101ms elapsed)
Oct 8 16:28:36.447: INFO: No Status.Info for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' yet
Oct 8 16:28:36.447: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005927318s elapsed)
Oct 8 16:28:38.449: INFO: No Status.Info for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' yet
Oct 8 16:28:38.449: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007798526s elapsed)
Oct 8 16:28:40.451: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-62tua' so far
Oct 8 16:28:40.451: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.009584854s elapsed)
Oct 8 16:28:42.453: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-62tua' so far
Oct 8 16:28:42.453: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.01138817s elapsed)
Oct 8 16:28:44.455: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-62tua' so far
Oct 8 16:28:44.455: INFO: Waiting for pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-62tua' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.013403338s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-4b2bd6f2-6e14-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 16:28:46.560: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:28:46.562: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:28:46.562: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-62tua" for this suite.
• [SLOW TEST:19.181 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
------------------------------
Probing container
with readiness probe that fails should never be ready and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
[BeforeEach] Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:28:51.606: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-2do7g
Oct 8 16:28:51.608: INFO: Get service account default in ns e2e-tests-container-probe-2do7g failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:28:53.609: INFO: Service account default in ns e2e-tests-container-probe-2do7g with secrets found. (2.003296269s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:28:53.609: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-2do7g
Oct 8 16:28:53.610: INFO: Service account default in ns e2e-tests-container-probe-2do7g with secrets found. (944.98µs)
[It] with readiness probe that fails should never be ready and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
[AfterEach] Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Oct 8 16:30:23.622: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:30:23.624: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:30:23.624: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-2do7g" for this suite.
• [SLOW TEST:97.036 seconds]
Probing container
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe that fails should never be ready and never restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
------------------------------
Kubectl client Kubectl api-versions
should check if v1 is in available api versions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:226
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:30:28.635: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-9a48t
Oct 8 16:30:28.636: INFO: Get service account default in ns e2e-tests-kubectl-9a48t failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:30:30.638: INFO: Service account default in ns e2e-tests-kubectl-9a48t with secrets found. (2.002270955s)
[It] should check if v1 is in available api versions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:226
STEP: validating api verions
Oct 8 16:30:30.638: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth api-versions'
Oct 8 16:30:30.648: INFO: Available Server Api Versions: v1
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-9a48t
• [SLOW TEST:7.021 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl api-versions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:227
should check if v1 is in available api versions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:226
------------------------------
Job
should stop a job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:182
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:30:35.658: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-x3bt5
Oct 8 16:30:35.659: INFO: Get service account default in ns e2e-tests-job-x3bt5 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:30:37.660: INFO: Service account default in ns e2e-tests-job-x3bt5 with secrets found. (2.002411813s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:30:37.660: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-x3bt5
Oct 8 16:30:37.661: INFO: Service account default in ns e2e-tests-job-x3bt5 with secrets found. (899.425µs)
[It] should stop a job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:182
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-x3bt5".
Oct 8 16:30:37.666: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 16:30:37.666: INFO:
Oct 8 16:30:37.666: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 16:30:37.668: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 16:30:37.668: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-x3bt5" for this suite.
• Failure [7.020 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should stop a job [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:182
Expected error:
<*errors.StatusError | 0xc20897a380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:165
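Note on the 404 above: the Job e2e specs go through the extensions/v1beta1 API group, and "the server could not find the requested resource" typically means the apiserver under test is not serving that group or resource; the later 'should scale a job down' and 'Daemon set' specs fail with the identical error. If that is the cause, enabling the group on the local apiserver is the usual fix for this vintage of Kubernetes (the flag below is an assumption about this cluster's setup, not something the log shows):

  $ kube-apiserver ... --runtime-config=extensions/v1beta1=true,extensions/v1beta1/jobs=true,extensions/v1beta1/daemonsets=true

Listing the served groups (curl 127.0.0.1:8080/apis) would confirm whether extensions is exposed.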
------------------------------
Kubectl client Simple pod
should support port-forward
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:213
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 16:30:42.677: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-w75pd
Oct 8 16:30:42.678: INFO: Get service account default in ns e2e-tests-kubectl-w75pd failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:30:44.679: INFO: Service account default in ns e2e-tests-kubectl-w75pd with secrets found. (2.002506054s)
[BeforeEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
STEP: creating the pod
Oct 8 16:30:44.679: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-w75pd'
[AfterEach] Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:156
STEP: using delete to clean up resources
Oct 8 16:30:44.693: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop --grace-period=0 -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-w75pd'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-w75pd
• Failure in Spec Setup (BeforeEach) [7.040 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Simple pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
should support port-forward [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:213
Oct 8 16:30:44.690: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth create -f /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-w75pd] [] <nil> the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
[] <nil> 0xc208944f00 exit status 1 <nil> true [0xc2088ce988 0xc2088ce9a8 0xc2088ce9f0] [0xc2088ce988 0xc2088ce9a8 0xc2088ce9f0] [0xc2088ce9a0 0xc2088ce9c8] [0x6bd870 0x6bd870] 0xc208622840}:
Command stdout:
stderr:
the path "/home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml" does not exist
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
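Note on the failure above: the spec never reached port-forwarding; kubectl create failed because /home/yifan/gopher/src/github.com/coreos/kubernetes/docs/user-guide/pod.yaml does not exist on disk. The e2e binary locates this fixture relative to its repo-root setting, so this looks like the repo root pointing at a checkout that lacks docs/user-guide/pod.yaml rather than a kubectl or rkt problem (an inference from the path, not confirmed by the log). Passing the tree the suite was built from should clear it, e.g. (hypothetical flag value):

  --repo-root=/home/yifan/kubernetes

handed to the e2e test binary, or via the harness's --test_args if one is in use.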
------------------------------
Kubelet regular resource usage tracking
over 30m0s with 0 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 16:30:49.717: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-39tke
Oct 8 16:30:49.721: INFO: Get service account default in ns e2e-tests-kubelet-perf-39tke failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 16:30:51.722: INFO: Service account default in ns e2e-tests-kubelet-perf-39tke with secrets found. (2.00557126s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 16:30:51.722: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubelet-perf-39tke
Oct 8 16:30:51.724: INFO: Service account default in ns e2e-tests-kubelet-perf-39tke with secrets found. (1.10719ms)
[BeforeEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:115
[It] over 30m0s with 0 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
STEP: Creating a RC of 0 pods and wait until all pods of this RC are running
STEP: creating replication controller resource0-9d02b251-6e14-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-perf-39tke
Oct 8 16:30:51.729: INFO: Created replication controller with name: resource0-9d02b251-6e14-11e5-bcd2-28d244b00276, namespace: e2e-tests-kubelet-perf-39tke, replica count: 0
Oct 8 16:30:51.729: INFO: Resource usage on node "127.0.0.1" is not ready yet
STEP: Start monitoring resource usage
Oct 8 16:30:51.729: INFO: Still running...29m59.999999855s left
Oct 8 16:35:51.729: INFO: Still running...24m59.999888404s left
Oct 8 16:40:51.740: INFO: 0 pods are running on node 127.0.0.1
Oct 8 16:40:51.740: INFO: Still running...19m59.989518005s left
Oct 8 16:45:51.740: INFO: Still running...14m59.989431262s left
Oct 8 16:50:51.752: INFO: 0 pods are running on node 127.0.0.1
Oct 8 16:50:51.752: INFO: Still running...9m59.977017085s left
Oct 8 16:55:51.752: INFO: Still running...4m59.976899091s left
Oct 8 17:00:51.739: INFO: 0 pods are running on node 127.0.0.1
STEP: Reporting overall resource usage
Oct 8 17:00:51.741: INFO: 0 pods are running on node 127.0.0.1
Oct 8 17:00:51.742: INFO:
CPU usage of containers on node "127.0.0.1":
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.331 0.376 0.507 0.588 0.823 1.098 1.778
Oct 8 17:00:51.742: INFO:
Resource usage on node "127.0.0.1":
container cpu(cores) memory(MB)
"/" 2.223 6291.26
STEP: Deleting the RC
STEP: deleting replication controller resource0-9d02b251-6e14-11e5-bcd2-28d244b00276 in namespace e2e-tests-kubelet-perf-39tke
Oct 8 17:00:53.756: INFO: Deleting RC resource0-9d02b251-6e14-11e5-bcd2-28d244b00276 took: 2.012737718s
Oct 8 17:00:53.757: INFO: Terminating RC resource0-9d02b251-6e14-11e5-bcd2-28d244b00276 pods took: 1.045808ms
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:00:53.757: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:00:53.758: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:00:53.758: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-kubelet-perf-39tke" for this suite.
[AfterEach] Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:119
• [SLOW TEST:1809.084 seconds]
Kubelet
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:143
regular resource usage tracking
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:131
over 30m0s with 0 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:129
------------------------------
Docker Containers
should be able to override the image's default commmand (docker entrypoint)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
[BeforeEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 17:00:58.801: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-6ysp4
Oct 8 17:00:58.806: INFO: Get service account default in ns e2e-tests-containers-6ysp4 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:01:00.808: INFO: Service account default in ns e2e-tests-containers-6ysp4 with secrets found. (2.006631482s)
[It] should be able to override the image's default commmand (docker entrypoint)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
STEP: Creating a pod to test override command
Oct 8 17:01:00.810: INFO: Waiting up to 5m0s for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:01:00.814: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:00.815: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.403763ms elapsed)
Oct 8 17:01:02.819: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:02.819: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.008859431s elapsed)
Oct 8 17:01:04.820: INFO: No Status.Info for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:04.820: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.010314307s elapsed)
Oct 8 17:01:06.823: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-6ysp4' so far
Oct 8 17:01:06.823: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.012764888s elapsed)
Oct 8 17:01:08.825: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-6ysp4' so far
Oct 8 17:01:08.825: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.014522623s elapsed)
Oct 8 17:01:10.827: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-containers-6ysp4' so far
Oct 8 17:01:10.827: INFO: Waiting for pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-containers-6ysp4' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.016875445s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod client-containers-d34e9e77-6e18-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2]
[AfterEach] Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:19.172 seconds]
Docker Containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default commmand (docker entrypoint)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
------------------------------
S
------------------------------
Services
should prevent NodePort collisions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:638
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:01:17.973: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-jmrl1
Oct 8 17:01:17.974: INFO: Get service account default in ns e2e-tests-services-jmrl1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:01:19.975: INFO: Service account default in ns e2e-tests-services-jmrl1 with secrets found. (2.002367269s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:01:19.975: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-jmrl1
Oct 8 17:01:19.976: INFO: Service account default in ns e2e-tests-services-jmrl1 with secrets found. (1.249852ms)
[BeforeEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
[It] should prevent NodePort collisions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:638
STEP: creating service nodeport-collision-1 with type NodePort in namespace e2e-tests-services-jmrl1
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace e2e-tests-services-jmrl1
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:01:20.496: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:01:20.497: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:01:20.497: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-services-jmrl1" for this suite.
[AfterEach] Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.564 seconds]
Services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:861
should prevent NodePort collisions
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service.go:638
------------------------------
EmptyDir volumes
volume on tmpfs should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:01:25.536: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-djkrh
Oct 8 17:01:25.538: INFO: Get service account default in ns e2e-tests-emptydir-djkrh failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:01:27.539: INFO: Service account default in ns e2e-tests-emptydir-djkrh with secrets found. (2.002755859s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:01:27.539: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-djkrh
Oct 8 17:01:27.540: INFO: Service account default in ns e2e-tests-emptydir-djkrh with secrets found. (1.044295ms)
[It] volume on tmpfs should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 8 17:01:27.543: INFO: Waiting up to 5m0s for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:01:27.545: INFO: No Status.Info for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:27.545: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.452862ms elapsed)
Oct 8 17:01:29.547: INFO: No Status.Info for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:29.547: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004104477s elapsed)
Oct 8 17:01:31.548: INFO: No Status.Info for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:31.548: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005764321s elapsed)
Oct 8 17:01:33.550: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-djkrh' so far
Oct 8 17:01:33.550: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007768803s elapsed)
Oct 8 17:01:35.552: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-djkrh' so far
Oct 8 17:01:35.552: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009850775s elapsed)
Oct 8 17:01:37.554: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-e33da492-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-djkrh' so far
Oct 8 17:01:37.554: INFO: Waiting for pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-djkrh' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011731268s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-e33da492-6e18-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:01:39.628: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:01:39.631: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:01:39.631: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-djkrh" for this suite.
• [SLOW TEST:19.147 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on tmpfs should have the correct mode
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
------------------------------
Job
should scale a job down
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:158
[BeforeEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:01:44.687: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-segn6
Oct 8 17:01:44.688: INFO: Get service account default in ns e2e-tests-job-segn6 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:01:46.690: INFO: Service account default in ns e2e-tests-job-segn6 with secrets found. (2.002649694s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:01:46.690: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-job-segn6
Oct 8 17:01:46.691: INFO: Service account default in ns e2e-tests-job-segn6 with secrets found. (975.975µs)
[It] should scale a job down
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:158
STEP: Creating a job
[AfterEach] Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-job-segn6".
Oct 8 17:01:46.696: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 17:01:46.696: INFO:
Oct 8 17:01:46.696: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:01:46.697: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:01:46.697: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-job-segn6" for this suite.
• Failure [7.021 seconds]
Job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:183
should scale a job down [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:158
Expected error:
<*errors.StatusError | 0xc208789680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:141
------------------------------
S
------------------------------
ServiceAccounts
should mount an API token into pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
[BeforeEach] ServiceAccounts
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:01:51.705: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-1nqim
Oct 8 17:01:51.707: INFO: Get service account default in ns e2e-tests-svcaccounts-1nqim failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:01:53.709: INFO: Service account default in ns e2e-tests-svcaccounts-1nqim with secrets found. (2.004078104s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:01:53.709: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-1nqim
Oct 8 17:01:53.710: INFO: Service account default in ns e2e-tests-svcaccounts-1nqim with secrets found. (1.161198ms)
[It] should mount an API token into pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Oct 8 17:01:54.220: INFO: Waiting up to 5m0s for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:01:54.222: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:54.222: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.765388ms elapsed)
Oct 8 17:01:56.224: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:56.224: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004002582s elapsed)
Oct 8 17:01:58.226: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:01:58.226: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005455203s elapsed)
Oct 8 17:02:00.232: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:00.232: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.012230989s elapsed)
Oct 8 17:02:02.235: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-svcaccounts-1nqim' so far
Oct 8 17:02:02.235: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.014995775s elapsed)
Oct 8 17:02:04.238: INFO: Nil State.Terminated for container 'token-test' in pod 'pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-svcaccounts-1nqim' so far
Oct 8 17:02:04.239: INFO: Waiting for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-svcaccounts-1nqim' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.018250459s elapsed)
STEP: Saw pod success
Oct 8 17:02:06.241: INFO: Waiting up to 5m0s for pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 container token-test: <nil>
STEP: Successfully fetched pod logs:
[AfterEach] ServiceAccounts
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-svcaccounts-1nqim".
Oct 8 17:02:06.324: INFO: event for pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276: {scheduler } Scheduled: Successfully assigned pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276 to 127.0.0.1
Oct 8 17:02:06.324: INFO: event for pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/mounttest:0.2" already present on machine
Oct 8 17:02:06.324: INFO: event for pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Created: Created with rkt id 28e3576d
Oct 8 17:02:06.324: INFO: event for pod-service-account-f323852c-6e18-11e5-bcd2-28d244b00276: {kubelet 127.0.0.1} Started: Started with rkt id 28e3576d
Oct 8 17:02:06.326: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 17:02:06.326: INFO:
Oct 8 17:02:06.326: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:02:06.327: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:02:06.327: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-svcaccounts-1nqim" for this suite.
• Failure [19.644 seconds]
ServiceAccounts
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:94
should mount an API token into pods [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
"content of file \"/var/run/secrets/kubernetes.io/serviceaccount/token\": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtMW5xaW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1tczgzZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjFhNTBiY2EtNmUxOC0xMWU1LTk1NmMtMjhkMjQ0YjAwMjc2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy0xbnFpbTpkZWZhdWx0In0.LbIrAI0fxo0r4wpLr39NUWc5RE-s4DMPZBM3itbWCXFAWUulGe9p-qhWblDWKBlgG-TQ9M1Jcw1Mvy7RqfZUDIAA4yrMJsKGlq6_kBHlV7TJMES9e_6b5931MBpfq0dsMpqd-6IPh3pi6CZVmEAomWDUWEDZP46hxSYTcfmuIbiDHRC2QxZW5tfFghCel43o3QVwAWxNW4ItcxbRkTzxXSOblSSUQGZXqfiY_DQDnP9wMgp-bfjJbdoTpWSlUFM4ialfOfVn2VZhTcexSi84T10qnn9EFFYg4auHHMsY877YTFfvunLXPYAEiFXk59r9rItYfGBrnqO4DXzOyPNSmw" in container output
Expected
<string>:
to contain substring
<string>: content of file "/var/run/secrets/kubernetes.io/serviceaccount/token": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtMW5xaW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1tczgzZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjFhNTBiY2EtNmUxOC0xMWU1LTk1NmMtMjhkMjQ0YjAwMjc2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy0xbnFpbTpkZWZhdWx0In0.LbIrAI0fxo0r4wpLr39NUWc5RE-s4DMPZBM3itbWCXFAWUulGe9p-qhWblDWKBlgG-TQ9M1Jcw1Mvy7RqfZUDIAA4yrMJsKGlq6_kBHlV7TJMES9e_6b5931MBpfq0dsMpqd-6IPh3pi6CZVmEAomWDUWEDZP46hxSYTcfmuIbiDHRC2QxZW5tfFghCel43o3QVwAWxNW4ItcxbRkTzxXSOblSSUQGZXqfiY_DQDnP9wMgp-bfjJbdoTpWSlUFM4ialfOfVn2VZhTcexSi84T10qnn9EFFYg4auHHMsY877YTFfvunLXPYAEiFXk59r9rItYfGBrnqO4DXzOyPNSmw
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
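Note on the failure above: the pod itself completed successfully and the events show the container was created and started under rkt (id 28e3576d), but the fetched logs are empty ("Successfully fetched pod logs:" with nothing after the colon), so the expected token content can never match. That pattern suggests log retrieval from this nspawn stage1, not the token mount, is what is broken (an inference; the container's stdout is not shown here). Reading the pod's journal on the node would distinguish the two, assuming the machined registration the nspawn stage1 normally provides:

  $ machinectl list
  $ sudo journalctl -M rkt-<full-pod-uuid>

(the log only prints the 8-character id prefix, so the full uuid has to come from rkt list or machinectl.)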
------------------------------
SS
------------------------------
Kubectl client Kubectl run pod
should create a pod from an image when restart is Never
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:631
[BeforeEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:77
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
Oct 8 17:02:11.350: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-qyyo8
Oct 8 17:02:11.351: INFO: Get service account default in ns e2e-tests-kubectl-qyyo8 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:02:13.352: INFO: Service account default in ns e2e-tests-kubectl-qyyo8 with secrets found. (2.002364825s)
[BeforeEach] Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:589
[It] should create a pod from an image when restart is Never
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:631
STEP: running the image nginx
Oct 8 17:02:13.352: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth run e2e-test-nginx-pod --restart=Never --image=nginx --namespace=e2e-tests-kubectl-qyyo8'
[AfterEach] Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:593
Oct 8 17:02:13.366: INFO: Running '/home/yifan/google-cloud-sdk/bin/kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qyyo8'
[AfterEach] Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:84
STEP: Destroying namespace for this suite e2e-tests-kubectl-qyyo8
• Failure [7.048 seconds]
Kubectl client
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:682
Kubectl run pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:633
should create a pod from an image when restart is Never [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:631
Oct 8 17:02:13.363: Error running &{/home/yifan/google-cloud-sdk/bin/kubectl [kubectl --server=127.0.0.1:8080 --kubeconfig=/home/yifan/.kubernetes_auth run e2e-test-nginx-pod --restart=Never --image=nginx --namespace=e2e-tests-kubectl-qyyo8] [] <nil> Error: unknown flag: --restart
Run 'kubectl help' for usage.
[] <nil> 0xc20891f1e0 exit status 1 <nil> true [0xc2088cf140 0xc2088cf160 0xc2088cf180] [0xc2088cf140 0xc2088cf160 0xc2088cf180] [0xc2088cf158 0xc2088cf178] [0x6bd870 0x6bd870] 0xc20855e180}:
Command stdout:
stderr:
Error: unknown flag: --restart
Run 'kubectl help' for usage.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
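Note on the failure above: "Error: unknown flag: --restart" comes from the gcloud-bundled /home/yifan/google-cloud-sdk/bin/kubectl, which evidently predates the --restart flag these kubectl run specs rely on; the earlier proxy --unix-socket EOF is consistent with the same stale client. Putting the kubectl built with this tree first on PATH (or otherwise pointing the suite at it) should remove this class of failure. Comparing versions makes the mismatch visible:

  $ /home/yifan/google-cloud-sdk/bin/kubectl version
  $ /home/yifan/kubernetes/_output/local/bin/linux/amd64/kubectl version

(the second path is an assumption about where this build drops its binaries.)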
------------------------------
Daemon set
should run and stop complex daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
[BeforeEach] Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:65
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:02:18.398: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-fuuwb
Oct 8 17:02:18.400: INFO: Get service account default in ns e2e-tests-daemonsets-fuuwb failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:02:20.402: INFO: Service account default in ns e2e-tests-daemonsets-fuuwb with secrets found. (2.004004144s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:02:20.402: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-daemonsets-fuuwb
Oct 8 17:02:20.404: INFO: Service account default in ns e2e-tests-daemonsets-fuuwb with secrets found. (2.061986ms)
[It] should run and stop complex daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
Oct 8 17:02:22.407: INFO: Creating daemon with a node selector daemon-set
[AfterEach] Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:71
STEP: Collecting events from namespace "e2e-tests-daemonsets-fuuwb".
Oct 8 17:02:24.420: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 17:02:24.420: INFO:
Oct 8 17:02:24.420: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:02:24.421: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:02:24.421: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-daemonsets-fuuwb" for this suite.
• Failure [11.036 seconds]
Daemon set
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:182
should run and stop complex daemon [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:181
Expected error:
<*errors.StatusError | 0xc208a75300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:155
------------------------------
Variable Expansion
should allow composing env vars into new env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
[BeforeEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:02:29.435: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-sgda1
Oct 8 17:02:29.436: INFO: Get service account default in ns e2e-tests-var-expansion-sgda1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:02:31.437: INFO: Service account default in ns e2e-tests-var-expansion-sgda1 with secrets found. (2.002241785s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:02:31.437: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-sgda1
Oct 8 17:02:31.438: INFO: Service account default in ns e2e-tests-var-expansion-sgda1 with secrets found. (945.267µs)
[It] should allow composing env vars into new env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
STEP: Creating a pod to test env composition
Oct 8 17:02:31.440: INFO: Waiting up to 5m0s for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:02:31.444: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:31.444: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.46596ms elapsed)
Oct 8 17:02:33.446: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:33.446: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005150919s elapsed)
Oct 8 17:02:35.447: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:35.447: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007020067s elapsed)
Oct 8 17:02:37.449: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-sgda1' so far
Oct 8 17:02:37.449: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.008843659s elapsed)
Oct 8 17:02:39.451: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-sgda1' so far
Oct 8 17:02:39.451: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.010827293s elapsed)
Oct 8 17:02:41.454: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-var-expansion-sgda1' so far
Oct 8 17:02:41.454: INFO: Waiting for pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-var-expansion-sgda1' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.013788009s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod var-expansion-0953adaf-6e19-11e5-bcd2-28d244b00276 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.0.0.1:443
FOOBAR=foo-value;;bar-value
USER=root
AC_APP_NAME=dapi-container
SHLVL=1
HOME=/root
LOGNAME=root
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
BAR=bar-value
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
SHELL=/bin/sh
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.0.0.1
AC_METADATA_URL=
[AfterEach] Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:02:43.526: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:02:43.529: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:02:43.529: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-sgda1" for this suite.
• [SLOW TEST:19.141 seconds]
Variable Expansion
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow composing env vars into new env vars
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
------------------------------
Reboot
each node by triggering kernel panic and ensure they function upon restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:77
[BeforeEach] Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:59
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
Reboot
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:100
each node by triggering kernel panic and ensure they function upon restart [BeforeEach]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/reboot.go:77
Oct 8 17:02:48.573: Only supported for providers [gce gke aws] (not local)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:211
------------------------------
EmptyDir volumes
should support (root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:02:48.580: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ilbzd
Oct 8 17:02:48.581: INFO: Get service account default in ns e2e-tests-emptydir-ilbzd failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:02:50.582: INFO: Service account default in ns e2e-tests-emptydir-ilbzd with secrets found. (2.002510342s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:02:50.582: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ilbzd
Oct 8 17:02:50.583: INFO: Service account default in ns e2e-tests-emptydir-ilbzd with secrets found. (928.963µs)
[It] should support (root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 8 17:02:50.585: INFO: Waiting up to 5m0s for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:02:50.587: INFO: No Status.Info for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:50.587: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.325131ms elapsed)
Oct 8 17:02:52.588: INFO: No Status.Info for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:52.589: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.002997848s elapsed)
Oct 8 17:02:54.590: INFO: No Status.Info for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:02:54.590: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.004693745s elapsed)
Oct 8 17:02:56.592: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-ilbzd' so far
Oct 8 17:02:56.592: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.006508546s elapsed)
Oct 8 17:02:58.594: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-ilbzd' so far
Oct 8 17:02:58.594: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.008290163s elapsed)
Oct 8 17:03:00.596: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-14bd0037-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-ilbzd' so far
Oct 8 17:03:00.596: INFO: Waiting for pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-ilbzd' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.010117894s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-14bd0037-6e19-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:03:02.673: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:03:02.675: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:03:02.675: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ilbzd" for this suite.
• [SLOW TEST:19.145 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
------------------------------
S
------------------------------
Service endpoints latency
should not be very high
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
[BeforeEach] Service endpoints latency
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:03:07.726: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-wgxdt
Oct 8 17:03:07.727: INFO: Get service account default in ns e2e-tests-svc-latency-wgxdt failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:03:09.728: INFO: Service account default in ns e2e-tests-svc-latency-wgxdt with secrets found. (2.002303788s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:03:09.728: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-wgxdt
Oct 8 17:03:09.730: INFO: Service account default in ns e2e-tests-svc-latency-wgxdt with secrets found. (1.605403ms)
[It] should not be very high
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-wgxdt
Oct 8 17:03:09.734: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-wgxdt, replica count: 1
Oct 8 17:03:10.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:11.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:12.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:13.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:14.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:15.734: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:16.735: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:17.735: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:18.735: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:19.735: INFO: svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:20.735: INFO: svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 8 17:03:20.872: INFO: Created: latency-svc-ne49m
Oct 8 17:03:20.910: INFO: Got endpoints: latency-svc-ne49m [74.684601ms]
Oct 8 17:03:21.107: INFO: Created: latency-svc-st6jz
Oct 8 17:03:21.136: INFO: Got endpoints: latency-svc-st6jz [110.782837ms]
Oct 8 17:03:21.139: INFO: Created: latency-svc-v8cqv
Oct 8 17:03:21.172: INFO: Created: latency-svc-6hkix
Oct 8 17:03:21.175: INFO: Got endpoints: latency-svc-v8cqv [147.829117ms]
Oct 8 17:03:21.214: INFO: Got endpoints: latency-svc-6hkix [187.092273ms]
Oct 8 17:03:21.215: INFO: Created: latency-svc-ljwu9
Oct 8 17:03:21.254: INFO: Created: latency-svc-9qhc8
Oct 8 17:03:21.257: INFO: Got endpoints: latency-svc-ljwu9 [230.011046ms]
Oct 8 17:03:21.291: INFO: Got endpoints: latency-svc-9qhc8 [265.806164ms]
Oct 8 17:03:21.295: INFO: Created: latency-svc-ymbuh
Oct 8 17:03:21.330: INFO: Got endpoints: latency-svc-ymbuh [304.498273ms]
Oct 8 17:03:21.333: INFO: Created: latency-svc-jaoa5
Oct 8 17:03:21.372: INFO: Got endpoints: latency-svc-jaoa5 [345.753609ms]
Oct 8 17:03:21.437: INFO: Created: latency-svc-pqssn
Oct 8 17:03:21.476: INFO: Got endpoints: latency-svc-pqssn [449.572776ms]
Oct 8 17:03:21.513: INFO: Created: latency-svc-haq8v
Oct 8 17:03:21.551: INFO: Got endpoints: latency-svc-haq8v [524.811355ms]
Oct 8 17:03:21.552: INFO: Created: latency-svc-n4meg
Oct 8 17:03:21.596: INFO: Created: latency-svc-jlhjj
Oct 8 17:03:21.597: INFO: Got endpoints: latency-svc-n4meg [570.745901ms]
Oct 8 17:03:21.631: INFO: Got endpoints: latency-svc-jlhjj [604.184063ms]
Oct 8 17:03:21.633: INFO: Created: latency-svc-3m2zw
Oct 8 17:03:21.672: INFO: Got endpoints: latency-svc-3m2zw [645.504873ms]
Oct 8 17:03:21.740: INFO: Created: latency-svc-llw86
Oct 8 17:03:21.777: INFO: Got endpoints: latency-svc-llw86 [750.990079ms]
Oct 8 17:03:21.817: INFO: Created: latency-svc-9vkl1
Oct 8 17:03:21.856: INFO: Got endpoints: latency-svc-9vkl1 [523.450264ms]
Oct 8 17:03:21.890: INFO: Created: latency-svc-fuplg
Oct 8 17:03:21.925: INFO: Got endpoints: latency-svc-fuplg [559.802778ms]
Oct 8 17:03:21.996: INFO: Created: latency-svc-kutvi
Oct 8 17:03:22.034: INFO: Got endpoints: latency-svc-kutvi [595.102353ms]
Oct 8 17:03:22.114: INFO: Created: latency-svc-a56rf
Oct 8 17:03:22.148: INFO: Got endpoints: latency-svc-a56rf [516.048578ms]
Oct 8 17:03:22.220: INFO: Created: latency-svc-44xi0
Oct 8 17:03:22.252: INFO: Got endpoints: latency-svc-44xi0 [1.226230382s]
Oct 8 17:03:22.289: INFO: Created: latency-svc-9v2rg
Oct 8 17:03:22.329: INFO: Created: latency-svc-28orr
Oct 8 17:03:22.370: INFO: Created: latency-svc-7ty2d
Oct 8 17:03:22.408: INFO: Created: latency-svc-ouplk
Oct 8 17:03:22.408: INFO: Got endpoints: latency-svc-9v2rg [667.87516ms]
Oct 8 17:03:22.476: INFO: Created: latency-svc-266to
Oct 8 17:03:22.551: INFO: Created: latency-svc-wn1na
Oct 8 17:03:22.590: INFO: Created: latency-svc-uea7v
Oct 8 17:03:22.632: INFO: Created: latency-svc-fjued
Oct 8 17:03:22.670: INFO: Created: latency-svc-wq39f
Oct 8 17:03:22.670: INFO: Got endpoints: latency-svc-28orr [853.253208ms]
Oct 8 17:03:22.708: INFO: Created: latency-svc-x5ep1
Oct 8 17:03:22.778: INFO: Got endpoints: latency-svc-7ty2d [1.751557787s]
Oct 8 17:03:22.778: INFO: Created: latency-svc-2j71a
Oct 8 17:03:22.812: INFO: Got endpoints: latency-svc-ouplk [918.513968ms]
Oct 8 17:03:22.851: INFO: Created: latency-svc-fbiyg
Oct 8 17:03:22.932: INFO: Created: latency-svc-v06fz
Oct 8 17:03:22.968: INFO: Got endpoints: latency-svc-266to [1.043721046s]
Oct 8 17:03:23.041: INFO: Created: latency-svc-mlw1o
Oct 8 17:03:23.111: INFO: Created: latency-svc-l7h6x
Oct 8 17:03:23.156: INFO: Got endpoints: latency-svc-wn1na [1.159222528s]
Oct 8 17:03:23.156: INFO: Created: latency-svc-kzy7p
Oct 8 17:03:23.194: INFO: Created: latency-svc-jplb5
Oct 8 17:03:23.231: INFO: Created: latency-svc-cjvty
Oct 8 17:03:23.231: INFO: Got endpoints: latency-svc-uea7v [1.199543722s]
Oct 8 17:03:23.302: INFO: Got endpoints: latency-svc-fjued [1.634717395s]
Oct 8 17:03:23.303: INFO: Created: latency-svc-pramn
Oct 8 17:03:23.371: INFO: Got endpoints: latency-svc-wq39f [1.258876252s]
Oct 8 17:03:23.412: INFO: Created: latency-svc-vwmcd
Oct 8 17:03:23.449: INFO: Created: latency-svc-csyjp
Oct 8 17:03:23.488: INFO: Created: latency-svc-voqh6
Oct 8 17:03:23.523: INFO: Got endpoints: latency-svc-x5ep1 [1.374991133s]
Oct 8 17:03:23.594: INFO: Created: latency-svc-a116b
Oct 8 17:03:23.629: INFO: Created: latency-svc-s58bm
Oct 8 17:03:23.707: INFO: Created: latency-svc-cgsx8
Oct 8 17:03:23.707: INFO: Got endpoints: latency-svc-2j71a [1.488792301s]
Oct 8 17:03:23.850: INFO: Got endpoints: latency-svc-fbiyg [1.443453499s]
Oct 8 17:03:23.890: INFO: Created: latency-svc-nm066
Oct 8 17:03:24.001: INFO: Got endpoints: latency-svc-v06fz [1.525807514s]
Oct 8 17:03:24.040: INFO: Created: latency-svc-rdzkf
Oct 8 17:03:24.149: INFO: Got endpoints: latency-svc-mlw1o [1.373298344s]
Oct 8 17:03:24.187: INFO: Created: latency-svc-gyp40
Oct 8 17:03:24.223: INFO: Got endpoints: latency-svc-l7h6x [1.369509144s]
Oct 8 17:03:24.299: INFO: Got endpoints: latency-svc-kzy7p [1.593071235s]
Oct 8 17:03:24.338: INFO: Created: latency-svc-cyufa
Oct 8 17:03:24.374: INFO: Got endpoints: latency-svc-jplb5 [1.441207823s]
Oct 8 17:03:24.408: INFO: Created: latency-svc-pb29u
Oct 8 17:03:24.482: INFO: Created: latency-svc-dy35j
Oct 8 17:03:24.555: INFO: Created: latency-svc-e5zte
Oct 8 17:03:24.644: INFO: Got endpoints: latency-svc-cjvty [1.675087838s]
Oct 8 17:03:24.766: INFO: Got endpoints: latency-svc-pramn [1.725426199s]
Oct 8 17:03:24.844: INFO: Created: latency-svc-tuxvr
Oct 8 17:03:24.882: INFO: Got endpoints: latency-svc-vwmcd [1.65340308s]
Oct 8 17:03:24.956: INFO: Created: latency-svc-f4te7
Oct 8 17:03:25.065: INFO: Got endpoints: latency-svc-csyjp [1.761835415s]
Oct 8 17:03:25.065: INFO: Created: latency-svc-etgme
Oct 8 17:03:25.174: INFO: Got endpoints: latency-svc-voqh6 [1.839176963s]
Oct 8 17:03:25.249: INFO: Created: latency-svc-kdjs9
Oct 8 17:03:25.250: INFO: Got endpoints: latency-svc-a116b [1.76365983s]
Oct 8 17:03:25.355: INFO: Created: latency-svc-w0p4y
Oct 8 17:03:25.355: INFO: Got endpoints: latency-svc-s58bm [1.831763673s]
Oct 8 17:03:25.432: INFO: Created: latency-svc-wqmkl
Oct 8 17:03:25.468: INFO: Got endpoints: latency-svc-cgsx8 [1.840255145s]
Oct 8 17:03:25.545: INFO: Created: latency-svc-ybqoc
Oct 8 17:03:25.654: INFO: Created: latency-svc-prjxe
Oct 8 17:03:25.743: INFO: Got endpoints: latency-svc-nm066 [1.92800769s]
Oct 8 17:03:25.926: INFO: Got endpoints: latency-svc-rdzkf [1.961712425s]
Oct 8 17:03:26.035: INFO: Created: latency-svc-3bscs
Oct 8 17:03:26.148: INFO: Got endpoints: latency-svc-gyp40 [2.034814359s]
Oct 8 17:03:26.149: INFO: Created: latency-svc-4rc7g
Oct 8 17:03:26.329: INFO: Created: latency-svc-oft9f
Oct 8 17:03:26.370: INFO: Got endpoints: latency-svc-cyufa [2.108562255s]
Oct 8 17:03:26.476: INFO: Got endpoints: latency-svc-dy35j [2.06936879s]
Oct 8 17:03:26.515: INFO: Got endpoints: latency-svc-e5zte [2.034279889s]
Oct 8 17:03:26.553: INFO: Created: latency-svc-fg8s7
Oct 8 17:03:26.555: INFO: Got endpoints: latency-svc-etgme [1.564366427s]
Oct 8 17:03:26.668: INFO: Got endpoints: latency-svc-f4te7 [1.785960264s]
Oct 8 17:03:26.702: INFO: Created: latency-svc-qqlwx
Oct 8 17:03:26.772: INFO: Created: latency-svc-oujcz
Oct 8 17:03:26.806: INFO: Created: latency-svc-w9j7l
Oct 8 17:03:26.887: INFO: Got endpoints: latency-svc-kdjs9 [1.712210934s]
Oct 8 17:03:26.888: INFO: Created: latency-svc-w9qcv
Oct 8 17:03:26.960: INFO: Got endpoints: latency-svc-pb29u [2.6181965s]
Oct 8 17:03:27.029: INFO: Got endpoints: latency-svc-tuxvr [2.262354081s]
Oct 8 17:03:27.067: INFO: Got endpoints: latency-svc-w0p4y [1.781638321s]
Oct 8 17:03:27.068: INFO: Created: latency-svc-vw874
Oct 8 17:03:27.145: INFO: Created: latency-svc-uoep6
Oct 8 17:03:27.148: INFO: Got endpoints: latency-svc-wqmkl [1.792022627s]
Oct 8 17:03:27.260: INFO: Created: latency-svc-qum3b
Oct 8 17:03:27.329: INFO: Created: latency-svc-vw24t
Oct 8 17:03:27.367: INFO: Created: latency-svc-izyao
Oct 8 17:03:27.367: INFO: Got endpoints: latency-svc-ybqoc [1.898013291s]
Oct 8 17:03:27.556: INFO: Created: latency-svc-qw5sf
Oct 8 17:03:28.094: INFO: Got endpoints: latency-svc-prjxe [2.514512899s]
Oct 8 17:03:28.168: INFO: Got endpoints: latency-svc-3bscs [2.242190322s]
Oct 8 17:03:28.203: INFO: Got endpoints: latency-svc-4rc7g [2.131456887s]
Oct 8 17:03:28.278: INFO: Created: latency-svc-zpfp9
Oct 8 17:03:28.314: INFO: Got endpoints: latency-svc-oft9f [2.061332307s]
Oct 8 17:03:28.390: INFO: Created: latency-svc-32xfo
Oct 8 17:03:28.426: INFO: Created: latency-svc-xbd0c
Oct 8 17:03:28.499: INFO: Created: latency-svc-6xi3p
Oct 8 17:03:28.694: INFO: Got endpoints: latency-svc-fg8s7 [2.21792693s]
Oct 8 17:03:28.878: INFO: Created: latency-svc-6xb03
Oct 8 17:03:28.993: INFO: Got endpoints: latency-svc-qqlwx [2.405906565s]
Oct 8 17:03:29.182: INFO: Created: latency-svc-a9m7n
Oct 8 17:03:29.214: INFO: Got endpoints: latency-svc-oujcz [2.583485543s]
Oct 8 17:03:29.402: INFO: Got endpoints: latency-svc-w9j7l [2.700429204s]
Oct 8 17:03:29.402: INFO: Created: latency-svc-qlwa2
Oct 8 17:03:29.518: INFO: Got endpoints: latency-svc-w9qcv [2.712288099s]
Oct 8 17:03:29.589: INFO: Created: latency-svc-dx3uo
Oct 8 17:03:29.702: INFO: Created: latency-svc-9shm0
Oct 8 17:03:29.795: INFO: Got endpoints: latency-svc-vw874 [2.800575499s]
Oct 8 17:03:29.953: INFO: Got endpoints: latency-svc-uoep6 [2.885622009s]
Oct 8 17:03:29.992: INFO: Created: latency-svc-4hw0c
Oct 8 17:03:30.105: INFO: Got endpoints: latency-svc-qum3b [2.95969314s]
Oct 8 17:03:30.139: INFO: Created: latency-svc-waiiv
Oct 8 17:03:30.175: INFO: Got endpoints: latency-svc-vw24t [2.99253368s]
Oct 8 17:03:30.246: INFO: Got endpoints: latency-svc-izyao [2.986481535s]
Oct 8 17:03:30.285: INFO: Created: latency-svc-ri45h
Oct 8 17:03:30.361: INFO: Created: latency-svc-29wsz
Oct 8 17:03:30.431: INFO: Created: latency-svc-501yq
Oct 8 17:03:30.466: INFO: Got endpoints: latency-svc-qw5sf [2.981448101s]
Oct 8 17:03:30.654: INFO: Created: latency-svc-4tn2f
Oct 8 17:03:30.743: INFO: Got endpoints: latency-svc-zpfp9 [2.539651426s]
Oct 8 17:03:30.851: INFO: Got endpoints: latency-svc-32xfo [2.572174278s]
Oct 8 17:03:30.933: INFO: Got endpoints: latency-svc-xbd0c [2.619778489s]
Oct 8 17:03:30.936: INFO: Created: latency-svc-vvblt
Oct 8 17:03:30.976: INFO: Got endpoints: latency-svc-6xi3p [2.549941314s]
Oct 8 17:03:31.047: INFO: Created: latency-svc-jfc3g
Oct 8 17:03:31.152: INFO: Created: latency-svc-48pzo
Oct 8 17:03:31.194: INFO: Created: latency-svc-h7ktn
Oct 8 17:03:31.247: INFO: Got endpoints: latency-svc-6xb03 [2.443043161s]
Oct 8 17:03:31.351: INFO: Got endpoints: latency-svc-a9m7n [2.248467395s]
Oct 8 17:03:31.422: INFO: Created: latency-svc-meu5c
Oct 8 17:03:31.463: INFO: Got endpoints: latency-svc-qlwa2 [2.137627776s]
Oct 8 17:03:31.538: INFO: Created: latency-svc-3tab3
Oct 8 17:03:31.643: INFO: Created: latency-svc-x2i3c
Oct 8 17:03:31.743: INFO: Got endpoints: latency-svc-dx3uo [2.225555174s]
Oct 8 17:03:31.824: INFO: Got endpoints: latency-svc-9shm0 [2.19796119s]
Oct 8 17:03:31.902: INFO: Got endpoints: latency-svc-4hw0c [1.989827325s]
Oct 8 17:03:31.933: INFO: Created: latency-svc-zhfk7
Oct 8 17:03:32.002: INFO: Created: latency-svc-npz7h
Oct 8 17:03:32.081: INFO: Created: latency-svc-myqqj
Oct 8 17:03:32.119: INFO: Got endpoints: latency-svc-waiiv [2.054969901s]
Oct 8 17:03:32.299: INFO: Created: latency-svc-ow7nz
Oct 8 17:03:32.300: INFO: Got endpoints: latency-svc-ri45h [2.091811884s]
Oct 8 17:03:32.449: INFO: Got endpoints: latency-svc-29wsz [2.165609949s]
Oct 8 17:03:32.485: INFO: Created: latency-svc-5ynfe
Oct 8 17:03:32.522: INFO: Got endpoints: latency-svc-501yq [2.161691658s]
Oct 8 17:03:32.640: INFO: Created: latency-svc-vd0am
Oct 8 17:03:32.676: INFO: Got endpoints: latency-svc-4tn2f [2.099954671s]
Oct 8 17:03:32.715: INFO: Created: latency-svc-0dbg4
Oct 8 17:03:32.859: INFO: Created: latency-svc-wjd69
Oct 8 17:03:32.944: INFO: Got endpoints: latency-svc-vvblt [2.092035627s]
Oct 8 17:03:33.126: INFO: Created: latency-svc-jz4s5
Oct 8 17:03:33.127: INFO: Got endpoints: latency-svc-jfc3g [2.150522741s]
Oct 8 17:03:33.161: INFO: Got endpoints: latency-svc-48pzo [2.11350232s]
Oct 8 17:03:33.272: INFO: Got endpoints: latency-svc-h7ktn [2.192056422s]
Oct 8 17:03:33.344: INFO: Created: latency-svc-a8v74
Oct 8 17:03:33.383: INFO: Created: latency-svc-0a7x9
Oct 8 17:03:33.453: INFO: Created: latency-svc-ui7ve
Oct 8 17:03:33.543: INFO: Got endpoints: latency-svc-meu5c [2.193315514s]
Oct 8 17:03:33.730: INFO: Created: latency-svc-8xv8v
Oct 8 17:03:33.730: INFO: Got endpoints: latency-svc-3tab3 [2.266612638s]
Oct 8 17:03:33.768: INFO: Got endpoints: latency-svc-x2i3c [2.197556571s]
Oct 8 17:03:33.950: INFO: Created: latency-svc-fxhyp
Oct 8 17:03:33.985: INFO: Created: latency-svc-ec2c0
Oct 8 17:03:34.043: INFO: Got endpoints: latency-svc-zhfk7 [2.182857207s]
Oct 8 17:03:34.154: INFO: Got endpoints: latency-svc-npz7h [2.222064936s]
Oct 8 17:03:34.234: INFO: Created: latency-svc-0pv09
Oct 8 17:03:34.236: INFO: Got endpoints: latency-svc-myqqj [2.234127807s]
Oct 8 17:03:34.381: INFO: Created: latency-svc-1vubt
Oct 8 17:03:34.382: INFO: Got endpoints: latency-svc-ow7nz [2.15718053s]
Oct 8 17:03:34.453: INFO: Created: latency-svc-ivwy1
Oct 8 17:03:34.560: INFO: Got endpoints: latency-svc-5ynfe [2.147487217s]
Oct 8 17:03:34.560: INFO: Created: latency-svc-0ld7h
Oct 8 17:03:34.759: INFO: Got endpoints: latency-svc-vd0am [2.19206393s]
Oct 8 17:03:34.761: INFO: Created: latency-svc-5pzx8
Oct 8 17:03:34.867: INFO: Got endpoints: latency-svc-0dbg4 [2.227101349s]
Oct 8 17:03:34.943: INFO: Created: latency-svc-lzwt3
Oct 8 17:03:34.983: INFO: Got endpoints: latency-svc-wjd69 [2.200365832s]
Oct 8 17:03:35.058: INFO: Created: latency-svc-n8kfh
Oct 8 17:03:35.166: INFO: Created: latency-svc-uaqda
Oct 8 17:03:35.166: INFO: Got endpoints: latency-svc-jz4s5 [2.11415534s]
Oct 8 17:03:35.348: INFO: Created: latency-svc-14mfw
Oct 8 17:03:35.444: INFO: Got endpoints: latency-svc-a8v74 [2.209195285s]
Oct 8 17:03:35.519: INFO: Got endpoints: latency-svc-0a7x9 [2.245633416s]
Oct 8 17:03:35.561: INFO: Got endpoints: latency-svc-ui7ve [2.179094151s]
Oct 8 17:03:35.631: INFO: Created: latency-svc-9v22w
Oct 8 17:03:35.664: INFO: Got endpoints: latency-svc-8xv8v [2.011854471s]
Oct 8 17:03:35.740: INFO: Created: latency-svc-2qlec
Oct 8 17:03:35.782: INFO: Created: latency-svc-ncpg7
Oct 8 17:03:35.860: INFO: Created: latency-svc-8k3ek
Oct 8 17:03:35.994: INFO: Got endpoints: latency-svc-fxhyp [2.152539294s]
Oct 8 17:03:36.066: INFO: Got endpoints: latency-svc-ec2c0 [2.189466122s]
Oct 8 17:03:36.184: INFO: Created: latency-svc-oe74d
Oct 8 17:03:36.217: INFO: Got endpoints: latency-svc-0pv09 [2.063695133s]
Oct 8 17:03:36.251: INFO: Created: latency-svc-3573n
Oct 8 17:03:36.403: INFO: Got endpoints: latency-svc-1vubt [2.132615111s]
Oct 8 17:03:36.403: INFO: Created: latency-svc-q85rl
Oct 8 17:03:36.544: INFO: Got endpoints: latency-svc-ivwy1 [2.162408273s]
Oct 8 17:03:36.580: INFO: Created: latency-svc-yp3n4
Oct 8 17:03:36.618: INFO: Got endpoints: latency-svc-0ld7h [2.128132756s]
Oct 8 17:03:36.735: INFO: Created: latency-svc-863cr
Oct 8 17:03:36.768: INFO: Got endpoints: latency-svc-5pzx8 [2.096167449s]
Oct 8 17:03:36.821: INFO: Created: latency-svc-fcj3h
Oct 8 17:03:36.967: INFO: Created: latency-svc-yagoe
Oct 8 17:03:37.045: INFO: Got endpoints: latency-svc-lzwt3 [2.177313695s]
Oct 8 17:03:37.149: INFO: Got endpoints: latency-svc-n8kfh [2.165793294s]
Oct 8 17:03:37.226: INFO: Created: latency-svc-sd6g0
Oct 8 17:03:37.227: INFO: Got endpoints: latency-svc-uaqda [2.135674469s]
Oct 8 17:03:37.340: INFO: Created: latency-svc-qtlnw
Oct 8 17:03:37.374: INFO: Got endpoints: latency-svc-14mfw [2.097080468s]
Oct 8 17:03:37.415: INFO: Created: latency-svc-ae0ud
Oct 8 17:03:37.601: INFO: Created: latency-svc-eq01x
Oct 8 17:03:37.695: INFO: Got endpoints: latency-svc-9v22w [2.135012907s]
Oct 8 17:03:37.854: INFO: Got endpoints: latency-svc-2qlec [2.223495793s]
Oct 8 17:03:37.855: INFO: Got endpoints: latency-svc-ncpg7 [2.189816166s]
Oct 8 17:03:37.894: INFO: Got endpoints: latency-svc-8k3ek [2.112227218s]
Oct 8 17:03:37.940: INFO: Created: latency-svc-hxpue
Oct 8 17:03:38.164: INFO: Created: latency-svc-1gdlz
Oct 8 17:03:38.207: INFO: Created: latency-svc-6c9xq
Oct 8 17:03:38.250: INFO: Got endpoints: latency-svc-oe74d [2.149298859s]
Oct 8 17:03:38.251: INFO: Created: latency-svc-1nch0
Oct 8 17:03:38.335: INFO: Got endpoints: latency-svc-3573n [2.151229554s]
Oct 8 17:03:38.429: INFO: Got endpoints: latency-svc-q85rl [2.109454265s]
Oct 8 17:03:38.467: INFO: Created: latency-svc-9a0sd
Oct 8 17:03:38.557: INFO: Created: latency-svc-fc72s
Oct 8 17:03:38.601: INFO: Got endpoints: latency-svc-yp3n4 [2.090924188s]
Oct 8 17:03:38.648: INFO: Created: latency-svc-x10gd
Oct 8 17:03:38.815: INFO: Created: latency-svc-ikvde
Oct 8 17:03:38.816: INFO: Got endpoints: latency-svc-863cr [2.156683556s]
Oct 8 17:03:38.946: INFO: Got endpoints: latency-svc-fcj3h [2.213019006s]
Oct 8 17:03:39.032: INFO: Created: latency-svc-r6exn
Oct 8 17:03:39.075: INFO: Got endpoints: latency-svc-yagoe [2.189743259s]
Oct 8 17:03:39.162: INFO: Created: latency-svc-s7mia
Oct 8 17:03:39.299: INFO: Created: latency-svc-a1u5o
Oct 8 17:03:39.397: INFO: Got endpoints: latency-svc-sd6g0 [2.248002536s]
Oct 8 17:03:39.527: INFO: Got endpoints: latency-svc-qtlnw [2.263311532s]
Oct 8 17:03:39.571: INFO: Got endpoints: latency-svc-ae0ud [2.233425174s]
Oct 8 17:03:39.619: INFO: Created: latency-svc-gqgej
Oct 8 17:03:39.702: INFO: Got endpoints: latency-svc-eq01x [2.21373812s]
Oct 8 17:03:39.788: INFO: Created: latency-svc-37qz5
Oct 8 17:03:39.835: INFO: Created: latency-svc-86dot
Oct 8 17:03:39.931: INFO: Created: latency-svc-8cpdw
Oct 8 17:03:40.343: INFO: Got endpoints: latency-svc-hxpue [2.491087526s]
Oct 8 17:03:40.471: INFO: Got endpoints: latency-svc-1gdlz [2.488984276s]
Oct 8 17:03:40.517: INFO: Got endpoints: latency-svc-6c9xq [2.488384018s]
Oct 8 17:03:40.560: INFO: Created: latency-svc-x8fam
Oct 8 17:03:40.561: INFO: Got endpoints: latency-svc-1nch0 [2.488238214s]
Oct 8 17:03:40.741: INFO: Created: latency-svc-usfcn
Oct 8 17:03:40.833: INFO: Created: latency-svc-q8nnm
Oct 8 17:03:40.871: INFO: Created: latency-svc-fwlf8
Oct 8 17:03:40.913: INFO: Got endpoints: latency-svc-9a0sd [2.535747784s]
Oct 8 17:03:41.003: INFO: Got endpoints: latency-svc-fc72s [2.539656776s]
Oct 8 17:03:41.048: INFO: Got endpoints: latency-svc-x10gd [2.491279247s]
Oct 8 17:03:41.133: INFO: Created: latency-svc-dsab2
Oct 8 17:03:41.216: INFO: Got endpoints: latency-svc-ikvde [2.489459649s]
Oct 8 17:03:41.260: INFO: Created: latency-svc-lvrmg
Oct 8 17:03:41.304: INFO: Created: latency-svc-8ody0
Oct 8 17:03:41.443: INFO: Created: latency-svc-84dbb
Oct 8 17:03:41.494: INFO: Got endpoints: latency-svc-r6exn [2.548422763s]
Oct 8 17:03:41.621: INFO: Got endpoints: latency-svc-s7mia [2.545985156s]
Oct 8 17:03:41.667: INFO: Got endpoints: latency-svc-a1u5o [2.456272968s]
Oct 8 17:03:41.712: INFO: Created: latency-svc-u1st0
Oct 8 17:03:41.888: INFO: Created: latency-svc-og0ro
Oct 8 17:03:41.931: INFO: Created: latency-svc-70pnm
Oct 8 17:03:41.972: INFO: Got endpoints: latency-svc-gqgej [2.446115273s]
Oct 8 17:03:42.106: INFO: Got endpoints: latency-svc-37qz5 [2.445401507s]
Oct 8 17:03:42.150: INFO: Got endpoints: latency-svc-86dot [2.44797566s]
Oct 8 17:03:42.192: INFO: Created: latency-svc-qgrdm
Oct 8 17:03:42.276: INFO: Got endpoints: latency-svc-8cpdw [2.441427753s]
Oct 8 17:03:42.367: INFO: Created: latency-svc-hq02t
Oct 8 17:03:42.417: INFO: Created: latency-svc-ilcno
Oct 8 17:03:42.499: INFO: Created: latency-svc-dznab
Oct 8 17:03:42.545: INFO: Got endpoints: latency-svc-x8fam [2.074354394s]
Oct 8 17:03:42.721: INFO: Got endpoints: latency-svc-usfcn [2.117261295s]
Oct 8 17:03:42.766: INFO: Created: latency-svc-koinn
Oct 8 17:03:42.766: INFO: Got endpoints: latency-svc-q8nnm [2.114378039s]
Oct 8 17:03:42.864: INFO: Got endpoints: latency-svc-fwlf8 [2.123950343s]
Oct 8 17:03:42.995: INFO: Created: latency-svc-ifmd1
Oct 8 17:03:43.086: INFO: Created: latency-svc-7oqll
Oct 8 17:03:43.129: INFO: Created: latency-svc-n4o41
Oct 8 17:03:43.193: INFO: Got endpoints: latency-svc-dsab2 [2.146400071s]
Oct 8 17:03:43.280: INFO: Got endpoints: latency-svc-lvrmg [2.147331673s]
Oct 8 17:03:43.370: INFO: Got endpoints: latency-svc-8ody0 [2.197068781s]
Oct 8 17:03:43.412: INFO: Got endpoints: latency-svc-84dbb [2.064303962s]
Oct 8 17:03:43.414: INFO: Created: latency-svc-7d0a9
Oct 8 17:03:43.485: INFO: Created: latency-svc-ky3ms
Oct 8 17:03:43.593: INFO: Created: latency-svc-bd7s1
Oct 8 17:03:43.629: INFO: Created: latency-svc-w4l5g
Oct 8 17:03:43.743: INFO: Got endpoints: latency-svc-u1st0 [2.122224326s]
Oct 8 17:03:43.857: INFO: Got endpoints: latency-svc-og0ro [2.099615376s]
Oct 8 17:03:43.895: INFO: Got endpoints: latency-svc-70pnm [2.095471196s]
Oct 8 17:03:43.931: INFO: Created: latency-svc-qh9ms
Oct 8 17:03:44.078: INFO: Created: latency-svc-ox1kv
Oct 8 17:03:44.116: INFO: Created: latency-svc-6ghmd
Oct 8 17:03:44.243: INFO: Got endpoints: latency-svc-qgrdm [2.138029117s]
Oct 8 17:03:44.430: INFO: Got endpoints: latency-svc-hq02t [2.196122294s]
Oct 8 17:03:44.431: INFO: Created: latency-svc-dz372
Oct 8 17:03:44.470: INFO: Got endpoints: latency-svc-ilcno [2.193439126s]
Oct 8 17:03:44.576: INFO: Got endpoints: latency-svc-dznab [2.158413819s]
Oct 8 17:03:44.651: INFO: Created: latency-svc-8fte0
Oct 8 17:03:44.687: INFO: Created: latency-svc-d2mlq
Oct 8 17:03:44.762: INFO: Got endpoints: latency-svc-koinn [2.088542833s]
Oct 8 17:03:44.763: INFO: Created: latency-svc-hfi52
Oct 8 17:03:45.023: INFO: Created: latency-svc-mm28x
Oct 8 17:03:45.059: INFO: Got endpoints: latency-svc-ifmd1 [2.195603788s]
Oct 8 17:03:45.096: INFO: Got endpoints: latency-svc-7oqll [2.18426271s]
Oct 8 17:03:45.171: INFO: Got endpoints: latency-svc-n4o41 [2.176282466s]
Oct 8 17:03:45.277: INFO: Created: latency-svc-9bvg7
Oct 8 17:03:45.351: INFO: Created: latency-svc-llyp9
Oct 8 17:03:45.392: INFO: Created: latency-svc-hi6kv
Oct 8 17:03:45.462: INFO: Got endpoints: latency-svc-7d0a9 [2.13647394s]
Oct 8 17:03:45.620: INFO: Got endpoints: latency-svc-ky3ms [2.212313602s]
Oct 8 17:03:45.665: INFO: Got endpoints: latency-svc-bd7s1 [2.17984414s]
Oct 8 17:03:45.666: INFO: Created: latency-svc-y82l8
Oct 8 17:03:45.766: INFO: Got endpoints: latency-svc-w4l5g [2.244162193s]
Oct 8 17:03:45.840: INFO: Created: latency-svc-zmapo
Oct 8 17:03:45.878: INFO: Created: latency-svc-tkerj
Oct 8 17:03:45.956: INFO: Created: latency-svc-ntia3
Oct 8 17:03:46.044: INFO: Got endpoints: latency-svc-qh9ms [2.186790204s]
Oct 8 17:03:46.149: INFO: Got endpoints: latency-svc-ox1kv [2.184044678s]
Oct 8 17:03:46.230: INFO: Created: latency-svc-y6zu6
Oct 8 17:03:46.231: INFO: Got endpoints: latency-svc-6ghmd [2.228116585s]
Oct 8 17:03:46.338: INFO: Created: latency-svc-jmzjc
Oct 8 17:03:46.373: INFO: Got endpoints: latency-svc-dz372 [2.017221923s]
Oct 8 17:03:46.409: INFO: Created: latency-svc-nozfl
Oct 8 17:03:46.561: INFO: Created: latency-svc-shv7p
Oct 8 17:03:46.693: INFO: Got endpoints: latency-svc-8fte0 [2.152909011s]
Oct 8 17:03:46.767: INFO: Got endpoints: latency-svc-d2mlq [2.191868193s]
Oct 8 17:03:46.806: INFO: Got endpoints: latency-svc-hfi52 [2.118697149s]
Oct 8 17:03:46.883: INFO: Created: latency-svc-jysov
Oct 8 17:03:46.919: INFO: Got endpoints: latency-svc-mm28x [1.992278314s]
Oct 8 17:03:46.990: INFO: Created: latency-svc-m08zy
Oct 8 17:03:47.024: INFO: Created: latency-svc-h6a27
Oct 8 17:03:47.101: INFO: Created: latency-svc-6ohdm
Oct 8 17:03:47.295: INFO: Got endpoints: latency-svc-9bvg7 [2.125257074s]
Oct 8 17:03:47.369: INFO: Got endpoints: latency-svc-llyp9 [2.165109219s]
Oct 8 17:03:47.406: INFO: Got endpoints: latency-svc-hi6kv [2.128914365s]
Oct 8 17:03:47.482: INFO: Created: latency-svc-vw6ao
Oct 8 17:03:47.523: INFO: Got endpoints: latency-svc-y82l8 [1.951391125s]
Oct 8 17:03:47.653: INFO: Created: latency-svc-08i9q
Oct 8 17:03:47.706: INFO: Created: latency-svc-lzzvf
Oct 8 17:03:47.776: INFO: Created: latency-svc-gd7d3
Oct 8 17:03:47.893: INFO: Got endpoints: latency-svc-zmapo [2.162144468s]
Oct 8 17:03:47.969: INFO: Got endpoints: latency-svc-tkerj [2.202837506s]
Oct 8 17:03:48.006: INFO: Got endpoints: latency-svc-ntia3 [2.127653878s]
Oct 8 17:03:48.080: INFO: Created: latency-svc-lypw2
Oct 8 17:03:48.190: INFO: Created: latency-svc-d8gu9
Oct 8 17:03:48.294: INFO: Got endpoints: latency-svc-y6zu6 [2.145314406s]
Oct 8 17:03:48.407: INFO: Got endpoints: latency-svc-jmzjc [2.139277778s]
Oct 8 17:03:48.478: INFO: Got endpoints: latency-svc-nozfl [2.140270561s]
Oct 8 17:03:48.595: INFO: Got endpoints: latency-svc-shv7p [2.107304534s]
Oct 8 17:03:48.945: INFO: Got endpoints: latency-svc-jysov [2.139076404s]
Oct 8 17:03:49.056: INFO: Got endpoints: latency-svc-m08zy [2.173742292s]
Oct 8 17:03:49.095: INFO: Got endpoints: latency-svc-h6a27 [2.177239272s]
Oct 8 17:03:49.166: INFO: Got endpoints: latency-svc-6ohdm [2.142266543s]
Oct 8 17:03:49.594: INFO: Got endpoints: latency-svc-vw6ao [2.191558472s]
Oct 8 17:03:49.744: INFO: Got endpoints: latency-svc-08i9q [2.261489294s]
Oct 8 17:03:49.818: INFO: Got endpoints: latency-svc-lzzvf [2.296173668s]
Oct 8 17:03:49.860: INFO: Got endpoints: latency-svc-gd7d3 [2.157778419s]
Oct 8 17:03:50.243: INFO: Got endpoints: latency-svc-lypw2 [2.239442305s]
Oct 8 17:03:50.322: INFO: Got endpoints: latency-svc-d8gu9 [2.243980868s]
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-wgxdt
Oct 8 17:03:52.450: INFO: Deleting RC svc-latency-rc took: 2.016130117s
Oct 8 17:04:02.452: INFO: Terminating RC svc-latency-rc pods took: 10.002776823s
Oct 8 17:04:02.452: INFO: Latencies: [110.782837ms 147.829117ms 187.092273ms 230.011046ms 265.806164ms 304.498273ms 345.753609ms 449.572776ms 516.048578ms 523.450264ms 524.811355ms 559.802778ms 570.745901ms 595.102353ms 604.184063ms 645.504873ms 667.87516ms 750.990079ms 853.253208ms 918.513968ms 1.043721046s 1.159222528s 1.199543722s 1.226230382s 1.258876252s 1.369509144s 1.373298344s 1.374991133s 1.441207823s 1.443453499s 1.488792301s 1.525807514s 1.564366427s 1.593071235s 1.634717395s 1.65340308s 1.675087838s 1.712210934s 1.725426199s 1.751557787s 1.761835415s 1.76365983s 1.781638321s 1.785960264s 1.792022627s 1.831763673s 1.839176963s 1.840255145s 1.898013291s 1.92800769s 1.951391125s 1.961712425s 1.989827325s 1.992278314s 2.011854471s 2.017221923s 2.034279889s 2.034814359s 2.054969901s 2.061332307s 2.063695133s 2.064303962s 2.06936879s 2.074354394s 2.088542833s 2.090924188s 2.091811884s 2.092035627s 2.095471196s 2.096167449s 2.097080468s 2.099615376s 2.099954671s 2.107304534s 2.108562255s 2.109454265s 2.112227218s 2.11350232s 2.11415534s 2.114378039s 2.117261295s 2.118697149s 2.122224326s 2.123950343s 2.125257074s 2.127653878s 2.128132756s 2.128914365s 2.131456887s 2.132615111s 2.135012907s 2.135674469s 2.13647394s 2.137627776s 2.138029117s 2.139076404s 2.139277778s 2.140270561s 2.142266543s 2.145314406s 2.146400071s 2.147331673s 2.147487217s 2.149298859s 2.150522741s 2.151229554s 2.152539294s 2.152909011s 2.156683556s 2.15718053s 2.157778419s 2.158413819s 2.161691658s 2.162144468s 2.162408273s 2.165109219s 2.165609949s 2.165793294s 2.173742292s 2.176282466s 2.177239272s 2.177313695s 2.179094151s 2.17984414s 2.182857207s 2.184044678s 2.18426271s 2.186790204s 2.189466122s 2.189743259s 2.189816166s 2.191558472s 2.191868193s 2.192056422s 2.19206393s 2.193315514s 2.193439126s 2.195603788s 2.196122294s 2.197068781s 2.197556571s 2.19796119s 2.200365832s 2.202837506s 2.209195285s 2.212313602s 2.213019006s 2.21373812s 2.21792693s 2.222064936s 2.223495793s 2.225555174s 2.227101349s 2.228116585s 2.233425174s 2.234127807s 2.239442305s 2.242190322s 2.243980868s 2.244162193s 2.245633416s 2.248002536s 2.248467395s 2.261489294s 2.262354081s 2.263311532s 2.266612638s 2.296173668s 2.405906565s 2.441427753s 2.443043161s 2.445401507s 2.446115273s 2.44797566s 2.456272968s 2.488238214s 2.488384018s 2.488984276s 2.489459649s 2.491087526s 2.491279247s 2.514512899s 2.535747784s 2.539651426s 2.539656776s 2.545985156s 2.548422763s 2.549941314s 2.572174278s 2.583485543s 2.6181965s 2.619778489s 2.700429204s 2.712288099s 2.800575499s 2.885622009s 2.95969314s 2.981448101s 2.986481535s 2.99253368s]
Oct 8 17:04:02.452: INFO: 50 %ile: 2.146400071s
Oct 8 17:04:02.452: INFO: 90 %ile: 2.491279247s
Oct 8 17:04:02.452: INFO: 99 %ile: 2.986481535s
Oct 8 17:04:02.452: INFO: Total sample count: 200
[AfterEach] Service endpoints latency
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:04:02.453: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:04:02.454: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:04:02.454: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-svc-latency-wgxdt" for this suite.
• [SLOW TEST:59.738 seconds]
Service endpoints latency
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:117
should not be very high
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
------------------------------
EmptyDir volumes
should support (root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:04:07.463: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-q8pcf
Oct 8 17:04:07.466: INFO: Get service account default in ns e2e-tests-emptydir-q8pcf failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:04:09.467: INFO: Service account default in ns e2e-tests-emptydir-q8pcf with secrets found. (2.003827506s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:04:09.467: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-q8pcf
Oct 8 17:04:09.468: INFO: Service account default in ns e2e-tests-emptydir-q8pcf with secrets found. (1.075245ms)
[It] should support (root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 8 17:04:09.471: INFO: Waiting up to 5m0s for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:04:09.473: INFO: No Status.Info for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:04:09.473: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.507991ms elapsed)
Oct 8 17:04:11.474: INFO: No Status.Info for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:04:11.474: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00306132s elapsed)
Oct 8 17:04:13.477: INFO: No Status.Info for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:04:13.477: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005630608s elapsed)
Oct 8 17:04:15.479: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-q8pcf' so far
Oct 8 17:04:15.479: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007348116s elapsed)
Oct 8 17:04:17.480: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-q8pcf' so far
Oct 8 17:04:17.481: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.00931173s elapsed)
Oct 8 17:04:19.482: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-q8pcf' so far
Oct 8 17:04:19.482: INFO: Waiting for pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-q8pcf' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011167661s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-43c1e2e1-6e19-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:04:21.564: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:04:21.566: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:04:21.566: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-q8pcf" for this suite.
• [SLOW TEST:19.152 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,default)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
------------------------------
Pods
should be updated
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:377
[BeforeEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:04:26.614: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-yitq0
Oct 8 17:04:26.615: INFO: Get service account default in ns e2e-tests-pods-yitq0 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:04:28.616: INFO: Service account default in ns e2e-tests-pods-yitq0 with secrets found. (2.002210331s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:04:28.616: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-yitq0
Oct 8 17:04:28.617: INFO: Service account default in ns e2e-tests-pods-yitq0 with secrets found. (962.737µs)
[It] should be updated
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:377
STEP: creating the pod
STEP: submitting the pod to kubernetes
Oct 8 17:04:28.620: INFO: Waiting up to 5m0s for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 status to be running
Oct 8 17:04:28.622: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (1.980822ms elapsed)
Oct 8 17:04:30.624: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (2.003732309s elapsed)
Oct 8 17:04:32.625: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (4.005615427s elapsed)
Oct 8 17:04:34.628: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (6.008101672s elapsed)
Oct 8 17:04:36.630: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (8.010400117s elapsed)
Oct 8 17:04:38.632: INFO: Waiting for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-pods-yitq0' status to be 'running'(found phase: "Pending", readiness: false) (10.012308325s elapsed)
Oct 8 17:04:40.634: INFO: Found pod 'pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276' on node '127.0.0.1'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 8 17:04:41.139: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Oct 8 17:04:41.643: INFO: Successfully updated pod
Oct 8 17:04:41.643: INFO: Waiting up to 5m0s for pod pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276 status to be running
Oct 8 17:04:41.648: INFO: Found pod 'pod-update-4f2bd650-6e19-11e5-bcd2-28d244b00276' on node '127.0.0.1'
STEP: verifying the updated pod is in kubernetes
Oct 8 17:04:41.650: INFO: Pod update OK
STEP: deleting the pod
[AfterEach] Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:04:41.715: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:04:41.717: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:04:41.717: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-yitq0" for this suite.
• [SLOW TEST:20.158 seconds]
Pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:778
should be updated
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pods.go:377
------------------------------
hostPath
should support r/w
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
[BeforeEach] hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:04:46.771: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-ik6lu
Oct 8 17:04:46.773: INFO: Get service account default in ns e2e-tests-hostpath-ik6lu failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:04:48.774: INFO: Service account default in ns e2e-tests-hostpath-ik6lu with secrets found. (2.002644702s)
[It] should support r/w
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
STEP: Creating a pod to test hostPath r/w
Oct 8 17:04:48.776: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Oct 8 17:04:48.778: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 17:04:48.778: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.456774ms elapsed)
Oct 8 17:04:50.780: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 17:04:50.780: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.003165596s elapsed)
Oct 8 17:04:52.782: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Oct 8 17:04:52.782: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.005369762s elapsed)
Oct 8 17:04:54.784: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-ik6lu' so far
Oct 8 17:04:54.784: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.007440367s elapsed)
Oct 8 17:04:56.786: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-ik6lu' so far
Oct 8 17:04:56.786: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.009549243s elapsed)
Oct 8 17:04:58.788: INFO: Nil State.Terminated for container 'test-container-1' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-ik6lu' so far
Oct 8 17:04:58.788: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-ik6lu' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.011464701s elapsed)
STEP: Saw pod success
Oct 8 17:05:00.790: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-host-path-test container test-container-2: <nil>
STEP: Successfully fetched pod logs:Error read file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrycontent of file "/test-volume/test-file": mount-tester new file
[AfterEach] hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-ik6lu
• [SLOW TEST:19.151 seconds]
hostPath
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should support r/w
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
------------------------------
EmptyDir volumes
should support (non-root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
[BeforeEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:05:05.923: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-6d8b1
Oct 8 17:05:05.924: INFO: Get service account default in ns e2e-tests-emptydir-6d8b1 failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:05:07.925: INFO: Service account default in ns e2e-tests-emptydir-6d8b1 with secrets found. (2.002323619s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:05:07.925: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-6d8b1
Oct 8 17:05:07.926: INFO: Service account default in ns e2e-tests-emptydir-6d8b1 with secrets found. (913.311µs)
[It] should support (non-root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 8 17:05:07.928: INFO: Waiting up to 5m0s for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 status to be success or failure
Oct 8 17:05:07.930: INFO: No Status.Info for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:05:07.930: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.324305ms elapsed)
Oct 8 17:05:09.933: INFO: No Status.Info for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:05:09.933: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004335651s elapsed)
Oct 8 17:05:11.960: INFO: No Status.Info for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' yet
Oct 8 17:05:11.960: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.03208151s elapsed)
Oct 8 17:05:13.962: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-6d8b1' so far
Oct 8 17:05:13.962: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.033981455s elapsed)
Oct 8 17:05:15.964: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-6d8b1' so far
Oct 8 17:05:15.964: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.035858151s elapsed)
Oct 8 17:05:17.966: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-6699d930-6e19-11e5-bcd2-28d244b00276' in namespace 'e2e-tests-emptydir-6d8b1' so far
Oct 8 17:05:17.966: INFO: Waiting for pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 in namespace 'e2e-tests-emptydir-6d8b1' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.03770467s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 127.0.0.1 pod pod-6699d930-6e19-11e5-bcd2-28d244b00276 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Oct 8 17:05:20.026: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:05:20.027: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:05:20.027: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-6d8b1" for this suite.
• [SLOW TEST:19.146 seconds]
EmptyDir volumes
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
------------------------------
ReplicationController
should serve a basic image on each replica with a public image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
[BeforeEach] ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/yifan/.kubernetes_auth
STEP: Building a namespace api object
Oct 8 17:05:25.068: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-oylfj
Oct 8 17:05:25.069: INFO: Get service account default in ns e2e-tests-replication-controller-oylfj failed, ignoring for 2s: serviceaccounts "default" not found
Oct 8 17:05:27.070: INFO: Service account default in ns e2e-tests-replication-controller-oylfj with secrets found. (2.002215455s)
STEP: Waiting for a default service account to be provisioned in namespace
Oct 8 17:05:27.071: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-oylfj
Oct 8 17:05:27.071: INFO: Service account default in ns e2e-tests-replication-controller-oylfj with secrets found. (920.493µs)
[It] should serve a basic image on each replica with a public image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
STEP: Creating replication controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276
Oct 8 17:05:27.076: INFO: Pod name my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Found 0 pods out of 2
Oct 8 17:05:32.078: INFO: Pod name my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Found 2 pods out of 2
STEP: Ensuring each pod is running
Oct 8 17:05:32.078: INFO: Waiting up to 5m0s for pod my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk status to be running
Oct 8 17:05:32.079: INFO: Waiting for pod my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk in namespace 'e2e-tests-replication-controller-oylfj' status to be 'running'(found phase: "Pending", readiness: false) (1.558862ms elapsed)
Oct 8 17:05:34.081: INFO: Waiting for pod my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk in namespace 'e2e-tests-replication-controller-oylfj' status to be 'running'(found phase: "Pending", readiness: false) (2.003554442s elapsed)
Oct 8 17:05:36.084: INFO: Waiting for pod my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk in namespace 'e2e-tests-replication-controller-oylfj' status to be 'running'(found phase: "Pending", readiness: false) (4.005684417s elapsed)
Oct 8 17:05:38.085: INFO: Found pod 'my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk' on node '127.0.0.1'
Oct 8 17:05:38.085: INFO: Waiting up to 5m0s for pod my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz status to be running
Oct 8 17:05:38.087: INFO: Found pod 'my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz' on node '127.0.0.1'
STEP: Trying to dial each unique pod
Oct 8 17:05:43.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:05:43.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:05:48.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:05:48.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:05:53.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:05:53.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:05:58.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:05:58.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:03.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:03.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:08.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:08.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:13.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:13.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:18.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:18.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:23.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:23.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:28.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:28.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:33.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:33.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:38.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:38.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:43.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:43.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:48.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:48.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:53.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:53.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:06:58.092: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:06:58.094: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:03.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:03.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:08.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:08.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:13.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:13.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:18.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:18.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:23.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:23.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:28.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:28.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:33.099: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:33.102: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:38.091: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:38.093: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
Oct 8 17:07:38.097: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 1 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk" but got "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b"
Oct 8 17:07:38.099: INFO: Controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: Replica 2 [my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz] expected response "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz" but got "rkt-05478757-711d-4846-a322-554e5062ec97"
STEP: deleting replication controller my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276 in namespace e2e-tests-replication-controller-oylfj
Oct 8 17:07:40.118: INFO: Deleting RC my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276 took: 2.016228876s
Oct 8 17:07:50.121: INFO: Terminating RC my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276 pods took: 10.003347888s
[AfterEach] ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
STEP: Collecting events from namespace "e2e-tests-replication-controller-oylfj".
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk: {scheduler } Scheduled: Successfully assigned my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk to 127.0.0.1
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/serve_hostname:1.1" already present on machine
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk: {kubelet 127.0.0.1} Created: Created with rkt id f83def12
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk: {kubelet 127.0.0.1} Started: Started with rkt id f83def12
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk: {kubelet 127.0.0.1} Killing: Killing with rkt id f83def12
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz: {scheduler } Scheduled: Successfully assigned my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz to 127.0.0.1
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz: {kubelet 127.0.0.1} Pulled: Container image "gcr.io/google_containers/serve_hostname:1.1" already present on machine
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz: {kubelet 127.0.0.1} Created: Created with rkt id 05478757
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz: {kubelet 127.0.0.1} Started: Started with rkt id 05478757
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz: {kubelet 127.0.0.1} Killing: Killing with rkt id 05478757
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-yaeuz
Oct 8 17:07:50.128: INFO: event for my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276: {replication-controller } SuccessfulCreate: Created pod: my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk
Oct 8 17:07:50.130: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 8 17:07:50.130: INFO:
Oct 8 17:07:50.130: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 8 17:07:50.131: INFO: Node 127.0.0.1 condition 1/1: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2015-10-08 14:54:47 -0700 PDT
Oct 8 17:07:50.132: INFO: Successfully found node 127.0.0.1 readiness to be true
STEP: Destroying namespace "e2e-tests-replication-controller-oylfj" for this suite.
• Failure [150.074 seconds]
ReplicationController
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:46
should serve a basic image on each replica with a public image [It]
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Oct 8 17:07:38.099: Did not get expected responses within the timeout period of 120.00 seconds.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:117
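Note on the failure above: the responses "rkt-f83def12-..." and "rkt-05478757-..." match the rkt ids shown in the pod events below, so the serve_hostname containers appear to be reporting the rkt pod's machine hostname (rkt-<uuid>) rather than the pod name the test expects. A minimal sketch of the comparison these retries are performing, using values taken from the log (hypothetical helper name; not the actual test/e2e code):

    package main

    import "fmt"

    // verifyReplicaResponse mirrors the check reflected in the log lines above:
    // each replica is expected to echo its own pod name, so a response carrying
    // the rkt pod UUID hostname instead is treated as a mismatch and retried
    // until the test's 120s timeout expires.
    func verifyReplicaResponse(podName, got string) error {
            if got != podName {
                    return fmt.Errorf("expected response %q but got %q", podName, got)
            }
            return nil
    }

    func main() {
            // Values copied from the log output above.
            err := verifyReplicaResponse(
                    "my-hostname-basic-72033862-6e19-11e5-bcd2-28d244b00276-443jk",
                    "rkt-f83def12-9abd-4ee4-9ab0-b1314bd2152b",
            )
            if err != nil {
                    fmt.Println(err) // mismatch, as seen in every retry above
            }
    }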
------------------------------
Summarizing 49 Failures:
[Fail] Examples e2e [Example]ClusterDns [It] should create pod that uses dns
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Panic!] DaemonRestart [It] Kubelet should not restart containers across restart
/usr/local/go/src/runtime/panic.go:387
[Fail] PreStop [It] should call prestop when killing a pod
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:141
[Panic!] Resource usage of system containers [It] should not exceed expected amount.
/usr/local/go/src/runtime/panic.go:387
[Fail] DaemonRestart [It] Controller Manager should not create/delete replicas across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63
[Fail] Kubectl client Kubectl expose [It] should create services for rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Kubectl client Simple pod [BeforeEach] should support exec
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Job [It] should run a job to completion when tasks sometimes fail and are locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:70
[Fail] Deployment [It] deployment should create new pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:72
[Fail] Docker Containers [It] should be able to override the image's default command and arguments
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Job [It] should run a job to completion when tasks succeed
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:56
[Fail] Kubectl client Kubectl describe [It] should check if kubectl describe prints relevant information for rc and pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Deployment [It] deployment should delete old pods and create new ones
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:136
[Fail] DNS [It] should provide DNS for services
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:237
[Fail] Docker Containers [It] should be able to override the image's default arguments (docker cmd)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Kubectl client Update Demo [It] should do a rolling update of a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Variable Expansion [It] should allow substituting values in a container's command
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Kubectl client Update Demo [It] should create and stop a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Kubectl client Kubectl label [BeforeEach] should update the label on a resource
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Daemon set [It] should run and stop simple daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:98
[Fail] Networking [It] should provide Internet connection for containers
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:81
[Fail] Kubectl client Kubectl run pod [It] should create a pod from an image when restart is OnFailure
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Kubectl client Kubectl logs [BeforeEach] should be able to retrieve and filter logs
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] DNS [It] should provide DNS for the cluster
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/dns.go:199
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
[Fail] Port forwarding With a server that expects no client request [It] should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends no data, and disconnects
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/portforward.go:102
[Fail] Kubectl client Guestbook application [It] should create and stop a working application
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] kube-ui [It] should check that the kube-ui instance is alive
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:44
[Fail] Job [It] should keep restarting failed pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:96
[Fail] Variable Expansion [It] should allow substituting values in a container's args
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Kubectl client Proxy server [It] should support proxy with --port 0
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:644
[Fail] Job [It] should scale a job up
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:115
[Fail] Kubectl client Kubectl patch [It] should add annotations for pods in rc
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Job [It] should run a job to completion when tasks sometimes fail and are not locally restarted
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:84
[Fail] Kubelet experimental resource usage tracking [It] over 30m0s with 50 pods per node.
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:66
[Fail] Kubectl client Simple pod [BeforeEach] should support inline execution and attach
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Kubectl client Update Demo [It] should scale a replication controller
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Deployment [It] deployment should scale up and down in the right order
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/deployment.go:217
[Fail] DaemonRestart [It] Scheduler should continue assigning pods to nodes across restart
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63
[Fail] EmptyDir volumes [It] should support (root,0666,tmpfs)
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Kubectl client Proxy server [It] should support --unix-socket=/path
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:673
[Fail] Job [It] should stop a job
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:165
[Fail] Kubectl client Simple pod [BeforeEach] should support port-forward
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Job [It] should scale a job down
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/job.go:141
[Fail] ServiceAccounts [It] should mount an API token into pods
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1139
[Fail] Kubectl client Kubectl run pod [It] should create a pod from an image when restart is Never
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1046
[Fail] Daemon set [It] should run and stop complex daemon
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:155
[Fail] ReplicationController [It] should serve a basic image on each replica with a public image
/home/yifan/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/rc.go:117
Ran 109 of 185 Specs in 7246.865 seconds
FAIL! -- 60 Passed | 49 Failed | 2 Pending | 74 Skipped --- FAIL: TestE2E (7246.87s)
FAIL