@caseydavenport
Created January 12, 2016 22:22
Conformance test using current-context of /home/gulfstream/.kube/config
Conformance test run date:Tue Jan 12 11:05:12 PST 2016
Conformance test SHA:f175451d8bbe0319805805241010e68752148314
Conformance test version tag(s):
Conformance test checking conformance with Kubernetes version 1.0
Conformance test: not doing test setup.
I0112 11:05:13.334204 7192 e2e_test.go:100] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:05:13.474: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 12 11:05:13.576: INFO: 2 / 2 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 12 11:05:13.576: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1452625513 - Will randomize all specs
Will run 72 of 175 specs
Networking
should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:05:13.605: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-tgi34
Jan 12 11:05:13.616: INFO: Get service account default in ns e2e-tests-nettest-tgi34 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:05:15.620: INFO: Service account default in ns e2e-tests-nettest-tgi34 with secrets found. (2.015361913s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:05:15.620: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-tgi34
Jan 12 11:05:15.624: INFO: Service account default in ns e2e-tests-nettest-tgi34 with secrets found. (3.594893ms)
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
STEP: Running container which tries to wget google.com
STEP: Verify that the pod succeed
Jan 12 11:05:15.833: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure
Jan 12 11:05:15.840: INFO: No Status.Info for container 'wget-test-container' in pod 'wget-test' yet
Jan 12 11:05:15.840: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-tgi34' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.001325ms elapsed)
Jan 12 11:05:17.845: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-tgi34' so far
Jan 12 11:05:17.845: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-tgi34' status to be 'success or failure'(found phase: "Running", readiness: false) (2.01237313s elapsed)
Jan 12 11:05:19.865: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-tgi34' so far
Jan 12 11:05:19.865: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-tgi34' status to be 'success or failure'(found phase: "Running", readiness: false) (4.032403404s elapsed)
Jan 12 11:05:21.870: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-tgi34' so far
Jan 12 11:05:21.870: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-tgi34' status to be 'success or failure'(found phase: "Running", readiness: false) (6.036832222s elapsed)
Jan 12 11:05:23.874: INFO: Nil State.Terminated for container 'wget-test-container' in pod 'wget-test' in namespace 'e2e-tests-nettest-tgi34' so far
Jan 12 11:05:23.874: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-tgi34' status to be 'success or failure'(found phase: "Running", readiness: false) (8.041568198s elapsed)
STEP: Saw pod success
[AfterEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:05:25.928: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:05:25.936: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:05:25.937: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:05:25.938: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:05:25.938: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:05:25.939: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:05:25.939: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-tgi34" for this suite.
• [SLOW TEST:17.385 seconds]
Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide Internet connection for containers [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:82
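
For reference, the wget-test pod that this spec creates looks roughly like the manifest below: a one-shot container that fetches an external site and has to exit 0 before the 5m timeout. The pod name, container name and namespace are taken from the log above; the image and exact wget flags are assumptions, since the suite builds the spec in Go rather than from a YAML file.

kubectl create --namespace=e2e-tests-nettest-tgi34 -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: wget-test
spec:
  restartPolicy: Never
  containers:
  - name: wget-test-container
    image: busybox                       # assumption: any image with wget will do
    command: ["wget", "-q", "-O", "/dev/null", "http://google.com"]
EOF
# The framework then polls the pod ("status to be success or failure" above)
# and passes only if the phase ends up Succeeded, i.e. wget exited 0.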
------------------------------
SS
------------------------------
Kubectl client Kubectl run rc
should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:815
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:05:30.978: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-es3d0
Jan 12 11:05:30.982: INFO: Get service account default in ns e2e-tests-kubectl-es3d0 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:05:32.985: INFO: Service account default in ns e2e-tests-kubectl-es3d0 with secrets found. (2.006631032s)
[BeforeEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:783
[It] should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:815
STEP: running the image nginx
Jan 12 11:05:32.985: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config run e2e-test-nginx-rc --image=nginx --namespace=e2e-tests-kubectl-es3d0'
Jan 12 11:05:33.243: INFO: replicationcontroller "e2e-test-nginx-rc" created
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
[AfterEach] Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:787
Jan 12 11:05:33.254: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-es3d0'
Jan 12 11:05:35.617: INFO: replicationcontroller "e2e-test-nginx-rc" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-es3d0
• [SLOW TEST:9.671 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl run rc
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:817
should create an rc from an image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:815
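
The two kubectl invocations above can be replayed by hand against a cluster of this vintage; on a 1.x client, kubectl run creates a replication controller whose pods carry a run=<name> label (that selector is an assumption based on the kubectl run default), and kubectl stop rc is the pre-1.2 spelling of kubectl delete rc.

kubectl run e2e-test-nginx-rc --image=nginx --namespace=e2e-tests-kubectl-es3d0
kubectl get rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-es3d0            # the rc the test verifies
kubectl get pods -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-es3d0   # the pod it controls
kubectl stop rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-es3d0           # cleanup, as in the AfterEach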
------------------------------
S
------------------------------
Docker Containers
should be able to override the image's default command and arguments [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:05:40.649: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-lyel5
Jan 12 11:05:40.656: INFO: Get service account default in ns e2e-tests-containers-lyel5 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:05:42.660: INFO: Service account default in ns e2e-tests-containers-lyel5 with secrets found. (2.011012792s)
[It] should be able to override the image's default command and arguments [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
STEP: Creating a pod to test override all
Jan 12 11:05:42.668: INFO: Waiting up to 5m0s for pod client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:05:42.676: INFO: No Status.Info for container 'test-container' in pod 'client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78' yet
Jan 12 11:05:42.676: INFO: Waiting for pod client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-lyel5' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.080855ms elapsed)
Jan 12 11:05:44.697: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-lyel5' so far
Jan 12 11:05:44.697: INFO: Waiting for pod client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-lyel5' status to be 'success or failure'(found phase: "Running", readiness: false) (2.029199352s elapsed)
Jan 12 11:05:46.751: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-lyel5' so far
Jan 12 11:05:46.751: INFO: Waiting for pod client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-lyel5' status to be 'success or failure'(found phase: "Running", readiness: false) (4.083004791s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod client-containers-7a1fd470-b95f-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2 override arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:13.367 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default command and arguments [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:83
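
The fetched log line [/ep-2 override arguments] is the overridden entrypoint echoing its own argv, i.e. the test sets both command (ENTRYPOINT) and args (CMD) on the container. A hand-rolled equivalent is sketched below; the image and echo command are assumptions standing in for the suite's own entrypoint-test image, which is what runs /ep-2.

kubectl create --namespace=e2e-tests-containers-lyel5 -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # assumption; the suite uses its own test image
    command: ["/bin/echo"]               # overrides the image's default ENTRYPOINT (the suite's image runs /ep-2)
    args: ["override", "arguments"]      # overrides the image's default CMD
EOF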
------------------------------
S
------------------------------
Downward API
should provide pod name and namespace as env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
[BeforeEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:05:54.015: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-5i86t
Jan 12 11:05:54.020: INFO: Get service account default in ns e2e-tests-downward-api-5i86t failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:05:56.025: INFO: Service account default in ns e2e-tests-downward-api-5i86t with secrets found. (2.009817174s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:05:56.025: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-5i86t
Jan 12 11:05:56.027: INFO: Service account default in ns e2e-tests-downward-api-5i86t with secrets found. (2.320862ms)
[It] should provide pod name and namespace as env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
STEP: Creating a pod to test downward api env vars
Jan 12 11:05:56.038: INFO: Waiting up to 5m0s for pod downward-api-82177ff8-b95f-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:05:56.044: INFO: No Status.Info for container 'dapi-container' in pod 'downward-api-82177ff8-b95f-11e5-ba19-000c29facd78' yet
Jan 12 11:05:56.045: INFO: Waiting for pod downward-api-82177ff8-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-downward-api-5i86t' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.632052ms elapsed)
Jan 12 11:05:58.049: INFO: Nil State.Terminated for container 'dapi-container' in pod 'downward-api-82177ff8-b95f-11e5-ba19-000c29facd78' in namespace 'e2e-tests-downward-api-5i86t' so far
Jan 12 11:05:58.049: INFO: Waiting for pod downward-api-82177ff8-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-downward-api-5i86t' status to be 'success or failure'(found phase: "Running", readiness: false) (2.011438229s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod downward-api-82177ff8-b95f-11e5-ba19-000c29facd78 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=downward-api-82177ff8-b95f-11e5-ba19-000c29facd78
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
POD_NAME=downward-api-82177ff8-b95f-11e5-ba19-000c29facd78
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
POD_NAMESPACE=e2e-tests-downward-api-5i86t
PWD=/
KUBERNETES_SERVICE_HOST=10.100.0.1
[AfterEach] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:06:00.105: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:06:00.118: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:06:00.119: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:06:00.119: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:06:00.119: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:06:00.119: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:06:00.119: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-5i86t" for this suite.
• [SLOW TEST:11.168 seconds]
Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:75
should provide pod name and namespace as env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downward_api.go:74
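
The environment dump above comes from a container whose POD_NAME and POD_NAMESPACE variables are injected through the downward API fieldRef mechanism; a minimal equivalent manifest is sketched below (the image and the env command are assumptions, while the fieldPath values are the standard ones and match the output captured above).

kubectl create --namespace=e2e-tests-downward-api-5i86t -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-test
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                       # assumption: any image with a shell
    command: ["sh", "-c", "env"]         # prints the environment, as captured in the log
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
EOF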
------------------------------
SS
------------------------------
Probing container
with readiness probe should not be ready before initial delay and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
[BeforeEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:06:05.185: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-x2f81
Jan 12 11:06:05.192: INFO: Get service account default in ns e2e-tests-container-probe-x2f81 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:06:07.197: INFO: Service account default in ns e2e-tests-container-probe-x2f81 with secrets found. (2.012594988s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:06:07.197: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-x2f81
Jan 12 11:06:07.200: INFO: Service account default in ns e2e-tests-container-probe-x2f81 with secrets found. (2.444465ms)
[It] with readiness probe should not be ready before initial delay and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
Jan 12 11:06:09.215: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:11.215: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:13.217: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:15.214: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:17.230: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:19.215: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:21.218: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:23.214: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:25.214: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:27.215: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:29.215: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:31.225: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:33.221: INFO: pod is not yet ready; pod has phase "Pending".
Jan 12 11:06:35.214: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:37.214: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:39.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:41.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:43.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:45.214: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:47.213: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:49.255: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:51.230: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:53.232: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:55.233: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:57.214: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:06:59.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:01.217: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:03.216: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:05.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:07.215: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:09.224: INFO: pod is not yet ready; pod has phase "Running".
Jan 12 11:07:11.223: INFO: pod is not yet ready; pod has phase "Running".
[AfterEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Jan 12 11:07:13.221: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:07:13.227: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:07:13.227: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:07:13.227: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:07:13.227: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:07:13.227: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:07:13.227: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-x2f81" for this suite.
• [SLOW TEST:73.078 seconds]
Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe should not be ready before initial delay and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:74
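
The minute or so of "pod is not yet ready" polling above is exactly what this spec asserts: the container carries a readiness probe with a long initial delay, so it must report not-ready until that delay has elapsed and must never restart. A hand-rolled equivalent, with the image, probe target and delay value as assumptions of the same order as the observed window:

kubectl create --namespace=e2e-tests-container-probe-x2f81 -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-test
spec:
  containers:
  - name: test-webserver
    image: nginx                         # assumption: anything serving HTTP on :80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30            # assumption: a delay long enough to observe the not-ready phase
EOF
# Readiness should flip to true only after the initial delay, and restartCount
# must stay 0 for the whole observation window:
kubectl get pod readiness-probe-test --namespace=e2e-tests-container-probe-x2f81 -w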
------------------------------
EmptyDir volumes
volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:07:18.290: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-r1lhi
Jan 12 11:07:18.309: INFO: Get service account default in ns e2e-tests-emptydir-r1lhi failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:07:20.313: INFO: Service account default in ns e2e-tests-emptydir-r1lhi with secrets found. (2.022249481s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:07:20.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-r1lhi
Jan 12 11:07:20.316: INFO: Service account default in ns e2e-tests-emptydir-r1lhi with secrets found. (3.33777ms)
[It] volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 12 11:07:20.324: INFO: Waiting up to 5m0s for pod pod-b454fd2e-b95f-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:07:20.331: INFO: No Status.Info for container 'test-container' in pod 'pod-b454fd2e-b95f-11e5-ba19-000c29facd78' yet
Jan 12 11:07:20.331: INFO: Waiting for pod pod-b454fd2e-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r1lhi' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.887442ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-b454fd2e-b95f-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:07:22.371: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:07:22.385: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:07:22.385: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:07:22.385: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:07:22.385: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:07:22.385: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:07:22.385: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-r1lhi" for this suite.
• [SLOW TEST:9.163 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on tmpfs should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:42
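
The fetched logs (mount type tmpfs, perms -rwxrwxrwx) come from a pod that mounts a memory-backed emptyDir and inspects it. A comparable manifest is sketched below; the image and the inspection command are assumptions standing in for the suite's mounttest helper, while medium: Memory is what makes the volume tmpfs.

kubectl create --namespace=e2e-tests-emptydir-r1lhi -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # assumption; the suite uses its own mounttest image
    command: ["sh", "-c", "grep ' /test-volume ' /proc/mounts; ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # Memory medium backs the emptyDir with tmpfs
EOF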
------------------------------
SS
------------------------------
Examples e2e [Example]ClusterDns
should create pod that uses dns [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:551
[BeforeEach] Examples e2e
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:61
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:07:27.424: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-examples-b5216
Jan 12 11:07:27.432: INFO: Get service account default in ns e2e-tests-examples-b5216 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:07:29.436: INFO: Service account default in ns e2e-tests-examples-b5216 with secrets found. (2.012307387s)
[It] should create pod that uses dns [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:551
Jan 12 11:07:29.446: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample0-uwo96
Jan 12 11:07:29.453: INFO: Service account default in ns e2e-tests-dnsexample0-uwo96 had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:07:31.457: INFO: Service account default in ns e2e-tests-dnsexample0-uwo96 with secrets found. (2.010682545s)
Jan 12 11:07:31.467: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-dnsexample1-gr88t
Jan 12 11:07:31.481: INFO: Get service account default in ns e2e-tests-dnsexample1-gr88t failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:07:33.484: INFO: Service account default in ns e2e-tests-dnsexample1-gr88t with secrets found. (2.017704897s)
Jan 12 11:07:33.485: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample0-uwo96'
Jan 12 11:07:33.747: INFO: replicationcontroller "dns-backend" created
Jan 12 11:07:33.747: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/cluster-dns/dns-backend-rc.yaml --namespace=e2e-tests-dnsexample1-gr88t'
Jan 12 11:07:34.031: INFO: replicationcontroller "dns-backend" created
Jan 12 11:07:34.031: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample0-uwo96'
Jan 12 11:07:34.385: INFO: service "dns-backend" created
Jan 12 11:07:34.385: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/cluster-dns/dns-backend-service.yaml --namespace=e2e-tests-dnsexample1-gr88t'
Jan 12 11:07:34.726: INFO: service "dns-backend" created
Jan 12 11:07:39.757: INFO: Service dns-backend in namespace e2e-tests-dnsexample0-uwo96 found.
Jan 12 11:07:44.803: INFO: Service dns-backend in namespace e2e-tests-dnsexample1-gr88t found.
STEP: trying to dial each unique pod
Jan 12 11:07:44.823: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-sm3ul]: "Hello World!", 1 of 1 required successes so far
Jan 12 11:07:44.823: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample0-uwo96
STEP: trying to dial the service e2e-tests-dnsexample0-uwo96.dns-backend via the proxy
Jan 12 11:07:44.837: INFO: Service dns-backend: found nonempty answer: Hello World!
STEP: trying to dial each unique pod
Jan 12 11:07:44.858: INFO: Controller dns-backend: Got non-empty result from replica 1 [dns-backend-zplgx]: "Hello World!", 1 of 1 required successes so far
Jan 12 11:07:44.858: INFO: found 1 backend pods responding in namespace e2e-tests-dnsexample1-gr88t
STEP: trying to dial the service e2e-tests-dnsexample1-gr88t.dns-backend via the proxy
Jan 12 11:07:44.867: INFO: Service dns-backend: found nonempty answer: Hello World!
Jan 12 11:07:44.870: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config exec dns-backend-sm3ul --namespace=e2e-tests-dnsexample0-uwo96 -- python -c
import socket
try:
socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-uwo96')
print 'ok'
except:
print 'err''
Jan 12 11:07:45.440: INFO: ok
Jan 12 11:07:45.440: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f - --namespace=e2e-tests-dnsexample0-uwo96'
Jan 12 11:07:45.706: INFO: pod "dns-frontend" created
Jan 12 11:07:45.707: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f - --namespace=e2e-tests-dnsexample1-gr88t'
Jan 12 11:07:46.082: INFO: pod "dns-frontend" created
Jan 12 11:07:46.082: INFO: Waiting up to 5m0s for pod dns-frontend status to be !pending
Jan 12 11:07:46.089: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample0-uwo96' status to be '!pending'(found phase: "Pending", readiness: false) (6.399435ms elapsed)
Jan 12 11:07:48.102: INFO: Saw pod 'dns-frontend' in namespace 'e2e-tests-dnsexample0-uwo96' out of pending state (found '"Running"')
Jan 12 11:07:48.102: INFO: Waiting up to 5m0s for pod dns-frontend status to be !pending
Jan 12 11:07:48.105: INFO: Waiting for pod dns-frontend in namespace 'e2e-tests-dnsexample1-gr88t' status to be '!pending'(found phase: "Pending", readiness: false) (3.399381ms elapsed)
Jan 12 11:07:50.110: INFO: Saw pod 'dns-frontend' in namespace 'e2e-tests-dnsexample1-gr88t' out of pending state (found '"Running"')
Jan 12 11:07:50.110: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log dns-frontend dns-frontend --namespace=e2e-tests-dnsexample0-uwo96'
Jan 12 11:07:50.381: INFO: 10.100.0.158
Send request to: http://dns-backend.e2e-tests-dnsexample0-uwo96.cluster.local:8000
<Response [200]>
Hello World!
Jan 12 11:07:50.381: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log dns-frontend dns-frontend --namespace=e2e-tests-dnsexample1-gr88t'
Jan 12 11:07:50.612: INFO: 10.100.0.158
Send request to: http://dns-backend.e2e-tests-dnsexample0-uwo96.cluster.local:8000
<Response [200]>
Hello World!
[AfterEach] Examples e2e
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:68
STEP: Destroying namespace for this suite e2e-tests-examples-b5216
• [SLOW TEST:38.312 seconds]
Examples e2e
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:553
[Example]ClusterDns
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:552
should create pod that uses dns [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:551
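
What this spec exercises: the dns-backend rc/service pair from examples/cluster-dns is stood up in two namespaces, each backend is dialed directly and through the proxy, and name resolution is then checked with both the short cross-namespace form (service.namespace) and the fully-qualified form on port 8000, as seen in the dns-frontend logs. By hand that boils down to roughly the commands below (pod name and namespace taken from the log; the second lookup of the .cluster.local name is an assumption mirroring the URL the frontend requests):

kubectl exec dns-backend-sm3ul --namespace=e2e-tests-dnsexample0-uwo96 -- \
  python -c "import socket; print socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-uwo96')"
kubectl exec dns-backend-sm3ul --namespace=e2e-tests-dnsexample0-uwo96 -- \
  python -c "import socket; print socket.gethostbyname('dns-backend.e2e-tests-dnsexample0-uwo96.cluster.local')"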
------------------------------
S
------------------------------
Kubectl client Proxy server
should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:892
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:08:05.735: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-ylm5l
Jan 12 11:08:05.741: INFO: Get service account default in ns e2e-tests-kubectl-ylm5l failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:08:07.745: INFO: Service account default in ns e2e-tests-kubectl-ylm5l with secrets found. (2.00971181s)
[It] should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:892
STEP: starting the proxy server
Jan 12 11:08:07.745: INFO: Asynchronously running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config proxy -p 0'
STEP: curling proxy /api/ output
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-ylm5l
• [SLOW TEST:7.345 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Proxy server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:921
should support proxy with --port 0 [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:892
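
Passing port 0 (here via the short flag -p) asks the proxy to bind an ephemeral port and report the address it chose, which the test then curls at /api/. The same check by hand, with PORT standing in for whatever address the proxy prints on startup:

kubectl proxy -p 0 &
# note the 127.0.0.1:PORT address printed by the proxy, then:
curl http://127.0.0.1:PORT/api/
kill %1                                  # stop the background proxy when done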
------------------------------
SS
------------------------------
Service endpoints latency
should not be very high [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
[BeforeEach] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:08:13.101: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-ktg0p
Jan 12 11:08:13.108: INFO: Get service account default in ns e2e-tests-svc-latency-ktg0p failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:08:15.112: INFO: Service account default in ns e2e-tests-svc-latency-ktg0p with secrets found. (2.010977238s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:08:15.112: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svc-latency-ktg0p
Jan 12 11:08:15.114: INFO: Service account default in ns e2e-tests-svc-latency-ktg0p with secrets found. (2.208341ms)
[It] should not be very high [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-ktg0p
Jan 12 11:08:15.124: INFO: Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-ktg0p, replica count: 1
Jan 12 11:08:16.125: INFO: 2016-01-12 11:08:16.125106321 -0800 PST svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
Jan 12 11:08:17.125: INFO: 2016-01-12 11:08:17.125804156 -0800 PST svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
Jan 12 11:08:18.126: INFO: 2016-01-12 11:08:18.126874241 -0800 PST svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
Jan 12 11:08:19.127: INFO: 2016-01-12 11:08:19.12721645 -0800 PST svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
Jan 12 11:08:19.263: INFO: Created: latency-svc-zavq1
Jan 12 11:08:19.298: INFO: Got endpoints: latency-svc-zavq1 [70.929542ms]
Jan 12 11:08:19.904: INFO: Created: latency-svc-fahlz
Jan 12 11:08:19.998: INFO: Created: latency-svc-h1uf4
Jan 12 11:08:20.000: INFO: Created: latency-svc-qfmtt
Jan 12 11:08:20.062: INFO: Created: latency-svc-3him6
Jan 12 11:08:20.091: INFO: Created: latency-svc-mutmo
Jan 12 11:08:20.132: INFO: Got endpoints: latency-svc-fahlz [712.568599ms]
Jan 12 11:08:20.142: INFO: Got endpoints: latency-svc-qfmtt [724.054088ms]
Jan 12 11:08:20.146: INFO: Got endpoints: latency-svc-h1uf4 [730.383139ms]
Jan 12 11:08:20.250: INFO: Created: latency-svc-wckec
Jan 12 11:08:20.253: INFO: Got endpoints: latency-svc-3him6 [834.313428ms]
Jan 12 11:08:20.302: INFO: Got endpoints: latency-svc-mutmo [883.782287ms]
Jan 12 11:08:20.330: INFO: Created: latency-svc-y93xn
Jan 12 11:08:20.403: INFO: Created: latency-svc-2g8oj
Jan 12 11:08:20.559: INFO: Created: latency-svc-q7t7i
Jan 12 11:08:20.570: INFO: Got endpoints: latency-svc-wckec [1.151268228s]
Jan 12 11:08:20.625: INFO: Created: latency-svc-mxkro
Jan 12 11:08:20.700: INFO: Created: latency-svc-2m2i9
Jan 12 11:08:20.702: INFO: Got endpoints: latency-svc-y93xn [1.284292286s]
Jan 12 11:08:20.728: INFO: Got endpoints: latency-svc-2g8oj [1.310177667s]
Jan 12 11:08:20.799: INFO: Created: latency-svc-xsz3r
Jan 12 11:08:20.835: INFO: Got endpoints: latency-svc-q7t7i [1.417770797s]
Jan 12 11:08:20.870: INFO: Got endpoints: latency-svc-mxkro [1.451583865s]
Jan 12 11:08:20.957: INFO: Created: latency-svc-gv58q
Jan 12 11:08:20.966: INFO: Got endpoints: latency-svc-2m2i9 [1.550321255s]
Jan 12 11:08:21.141: INFO: Got endpoints: latency-svc-xsz3r [1.724473919s]
Jan 12 11:08:21.319: INFO: Created: latency-svc-iy4t0
Jan 12 11:08:21.320: INFO: Created: latency-svc-v2ym4
Jan 12 11:08:21.473: INFO: Got endpoints: latency-svc-gv58q [2.055828074s]
Jan 12 11:08:21.584: INFO: Got endpoints: latency-svc-v2ym4 [2.167086181s]
Jan 12 11:08:21.654: INFO: Got endpoints: latency-svc-iy4t0 [2.238613943s]
Jan 12 11:08:22.019: INFO: Created: latency-svc-rkfwk
Jan 12 11:08:22.080: INFO: Got endpoints: latency-svc-rkfwk [664.547558ms]
Jan 12 11:08:22.287: INFO: Created: latency-svc-xd5dm
Jan 12 11:08:22.348: INFO: Created: latency-svc-dtn37
Jan 12 11:08:22.357: INFO: Got endpoints: latency-svc-xd5dm [778.374245ms]
Jan 12 11:08:22.423: INFO: Got endpoints: latency-svc-dtn37 [737.307792ms]
Jan 12 11:08:22.516: INFO: Created: latency-svc-53u2p
Jan 12 11:08:22.567: INFO: Created: latency-svc-wjhkg
Jan 12 11:08:22.626: INFO: Got endpoints: latency-svc-53u2p [840.131976ms]
Jan 12 11:08:22.697: INFO: Created: latency-svc-9cxfu
Jan 12 11:08:22.700: INFO: Got endpoints: latency-svc-wjhkg [819.376224ms]
Jan 12 11:08:22.732: INFO: Created: latency-svc-4izhf
Jan 12 11:08:22.832: INFO: Created: latency-svc-jpcea
Jan 12 11:08:22.866: INFO: Got endpoints: latency-svc-9cxfu [959.957671ms]
Jan 12 11:08:22.934: INFO: Got endpoints: latency-svc-4izhf [1.018392531s]
Jan 12 11:08:22.939: INFO: Created: latency-svc-a0ea4
Jan 12 11:08:22.979: INFO: Created: latency-svc-spkkb
Jan 12 11:08:23.023: INFO: Got endpoints: latency-svc-jpcea [1.092302264s]
Jan 12 11:08:23.097: INFO: Created: latency-svc-9puv7
Jan 12 11:08:23.143: INFO: Got endpoints: latency-svc-a0ea4 [1.194770354s]
Jan 12 11:08:23.411: INFO: Got endpoints: latency-svc-spkkb [1.379933981s]
Jan 12 11:08:23.453: INFO: Created: latency-svc-1emx9
Jan 12 11:08:23.534: INFO: Got endpoints: latency-svc-9puv7 [1.463938584s]
Jan 12 11:08:23.597: INFO: Got endpoints: latency-svc-1emx9 [1.503327495s]
Jan 12 11:08:23.678: INFO: Created: latency-svc-c1oc7
Jan 12 11:08:23.783: INFO: Created: latency-svc-hldxv
Jan 12 11:08:23.793: INFO: Got endpoints: latency-svc-c1oc7 [1.394018841s]
Jan 12 11:08:23.873: INFO: Got endpoints: latency-svc-hldxv [1.283427351s]
Jan 12 11:08:24.059: INFO: Created: latency-svc-k9mj6
Jan 12 11:08:24.110: INFO: Got endpoints: latency-svc-k9mj6 [1.741356624s]
Jan 12 11:08:24.182: INFO: Created: latency-svc-0q57u
Jan 12 11:08:24.320: INFO: Created: latency-svc-8dlsp
Jan 12 11:08:24.362: INFO: Got endpoints: latency-svc-0q57u [833.394915ms]
Jan 12 11:08:24.399: INFO: Created: latency-svc-os9w5
Jan 12 11:08:24.508: INFO: Created: latency-svc-wust5
Jan 12 11:08:24.529: INFO: Got endpoints: latency-svc-8dlsp [836.96365ms]
Jan 12 11:08:24.600: INFO: Got endpoints: latency-svc-os9w5 [814.241379ms]
Jan 12 11:08:24.616: INFO: Got endpoints: latency-svc-wust5 [760.081349ms]
Jan 12 11:08:24.709: INFO: Created: latency-svc-ze1ow
Jan 12 11:08:24.833: INFO: Created: latency-svc-p6lr4
Jan 12 11:08:24.839: INFO: Got endpoints: latency-svc-ze1ow [937.741491ms]
Jan 12 11:08:24.865: INFO: Created: latency-svc-mbl11
Jan 12 11:08:24.939: INFO: Created: latency-svc-i36s5
Jan 12 11:08:24.959: INFO: Got endpoints: latency-svc-p6lr4 [933.444824ms]
Jan 12 11:08:24.996: INFO: Got endpoints: latency-svc-mbl11 [1.513079884s]
Jan 12 11:08:25.040: INFO: Got endpoints: latency-svc-i36s5 [951.456425ms]
Jan 12 11:08:25.217: INFO: Created: latency-svc-j0jbc
Jan 12 11:08:25.230: INFO: Created: latency-svc-lh00d
Jan 12 11:08:25.315: INFO: Created: latency-svc-znrcu
Jan 12 11:08:25.349: INFO: Created: latency-svc-phk9n
Jan 12 11:08:25.359: INFO: Got endpoints: latency-svc-lh00d [1.08673198s]
Jan 12 11:08:25.368: INFO: Created: latency-svc-33jbf
Jan 12 11:08:25.473: INFO: Created: latency-svc-rc07a
Jan 12 11:08:25.479: INFO: Got endpoints: latency-svc-j0jbc [1.247332103s]
Jan 12 11:08:25.555: INFO: Created: latency-svc-on1jr
Jan 12 11:08:25.657: INFO: Created: latency-svc-icq7u
Jan 12 11:08:25.721: INFO: Created: latency-svc-35cfl
Jan 12 11:08:25.754: INFO: Got endpoints: latency-svc-znrcu [1.472260286s]
Jan 12 11:08:25.835: INFO: Got endpoints: latency-svc-phk9n [1.277040897s]
Jan 12 11:08:25.894: INFO: Got endpoints: latency-svc-33jbf [1.909006438s]
Jan 12 11:08:25.929: INFO: Created: latency-svc-b0daq
Jan 12 11:08:25.967: INFO: Got endpoints: latency-svc-rc07a [1.251627592s]
Jan 12 11:08:26.188: INFO: Created: latency-svc-c40oo
Jan 12 11:08:26.259: INFO: Created: latency-svc-emrv8
Jan 12 11:08:26.321: INFO: Created: latency-svc-5ca8i
Jan 12 11:08:26.365: INFO: Created: latency-svc-6zh9a
Jan 12 11:08:26.366: INFO: Got endpoints: latency-svc-on1jr [2.254053069s]
Jan 12 11:08:26.390: INFO: Created: latency-svc-543se
Jan 12 11:08:26.454: INFO: Created: latency-svc-n058s
Jan 12 11:08:26.479: INFO: Got endpoints: latency-svc-icq7u [1.495852746s]
Jan 12 11:08:26.561: INFO: Got endpoints: latency-svc-35cfl [1.442469226s]
Jan 12 11:08:26.632: INFO: Created: latency-svc-jx043
Jan 12 11:08:26.652: INFO: Got endpoints: latency-svc-b0daq [1.164095068s]
Jan 12 11:08:26.792: INFO: Created: latency-svc-geto5
Jan 12 11:08:26.835: INFO: Created: latency-svc-zmnca
Jan 12 11:08:26.974: INFO: Created: latency-svc-ddw8v
Jan 12 11:08:27.040: INFO: Created: latency-svc-1rwfm
Jan 12 11:08:27.085: INFO: Got endpoints: latency-svc-c40oo [1.431787729s]
Jan 12 11:08:27.280: INFO: Got endpoints: latency-svc-5ca8i [1.504752194s]
Jan 12 11:08:27.289: INFO: Got endpoints: latency-svc-emrv8 [1.56875046s]
Jan 12 11:08:27.317: INFO: Created: latency-svc-6yfbl
Jan 12 11:08:27.321: INFO: Got endpoints: latency-svc-6zh9a [1.813379346s]
Jan 12 11:08:27.437: INFO: Created: latency-svc-3v8gw
Jan 12 11:08:27.474: INFO: Got endpoints: latency-svc-543se [1.646126417s]
Jan 12 11:08:27.512: INFO: Created: latency-svc-egvyo
Jan 12 11:08:27.609: INFO: Created: latency-svc-bk5p7
Jan 12 11:08:27.777: INFO: Created: latency-svc-brpd4
Jan 12 11:08:27.783: INFO: Got endpoints: latency-svc-n058s [1.824261684s]
Jan 12 11:08:27.796: INFO: Got endpoints: latency-svc-jx043 [1.701900376s]
Jan 12 11:08:28.006: INFO: Created: latency-svc-7ymjs
Jan 12 11:08:28.046: INFO: Got endpoints: latency-svc-geto5 [2.285399353s]
Jan 12 11:08:28.124: INFO: Created: latency-svc-wi8u2
Jan 12 11:08:28.170: INFO: Created: latency-svc-pk86z
Jan 12 11:08:28.218: INFO: Got endpoints: latency-svc-zmnca [1.998242484s]
Jan 12 11:08:28.241: INFO: Created: latency-svc-2n6px
Jan 12 11:08:28.258: INFO: Created: latency-svc-bfx09
Jan 12 11:08:28.273: INFO: Got endpoints: latency-svc-ddw8v [1.800077792s]
Jan 12 11:08:28.363: INFO: Created: latency-svc-kfdpi
Jan 12 11:08:28.404: INFO: Got endpoints: latency-svc-1rwfm [1.756182825s]
Jan 12 11:08:28.416: INFO: Created: latency-svc-n7b2n
Jan 12 11:08:28.539: INFO: Created: latency-svc-gkj0d
Jan 12 11:08:28.590: INFO: Created: latency-svc-weo9v
Jan 12 11:08:28.613: INFO: Created: latency-svc-ngbu1
Jan 12 11:08:28.623: INFO: Got endpoints: latency-svc-6yfbl [1.60967707s]
Jan 12 11:08:28.755: INFO: Got endpoints: latency-svc-3v8gw [1.7191197s]
Jan 12 11:08:28.766: INFO: Created: latency-svc-xtgjz
Jan 12 11:08:28.866: INFO: Got endpoints: latency-svc-egvyo [1.797164251s]
Jan 12 11:08:28.915: INFO: Created: latency-svc-r8xus
Jan 12 11:08:29.017: INFO: Created: latency-svc-vte4z
Jan 12 11:08:29.182: INFO: Got endpoints: latency-svc-bk5p7 [2.086797897s]
Jan 12 11:08:29.338: INFO: Created: latency-svc-lwrmh
Jan 12 11:08:29.423: INFO: Got endpoints: latency-svc-brpd4 [1.8080199s]
Jan 12 11:08:29.559: INFO: Created: latency-svc-ugqrg
Jan 12 11:08:29.671: INFO: Got endpoints: latency-svc-7ymjs [1.977289578s]
Jan 12 11:08:29.793: INFO: Created: latency-svc-wk1yy
Jan 12 11:08:29.825: INFO: Got endpoints: latency-svc-wi8u2 [1.996602105s]
Jan 12 11:08:29.925: INFO: Got endpoints: latency-svc-pk86z [2.020250296s]
Jan 12 11:08:29.967: INFO: Created: latency-svc-jm3v9
Jan 12 11:08:30.053: INFO: Got endpoints: latency-svc-2n6px [2.258885463s]
Jan 12 11:08:30.183: INFO: Created: latency-svc-jasq0
Jan 12 11:08:30.234: INFO: Got endpoints: latency-svc-bfx09 [2.197933176s]
Jan 12 11:08:30.252: INFO: Created: latency-svc-7nzvg
Jan 12 11:08:30.291: INFO: Got endpoints: latency-svc-kfdpi [2.11500764s]
Jan 12 11:08:30.583: INFO: Created: latency-svc-ai2ue
Jan 12 11:08:30.621: INFO: Created: latency-svc-oc6sf
Jan 12 11:08:30.637: INFO: Got endpoints: latency-svc-n7b2n [2.379301916s]
Jan 12 11:08:30.784: INFO: Created: latency-svc-d8ech
Jan 12 11:08:30.786: INFO: Got endpoints: latency-svc-gkj0d [2.34491172s]
Jan 12 11:08:30.834: INFO: Got endpoints: latency-svc-weo9v [2.380929303s]
Jan 12 11:08:30.906: INFO: Got endpoints: latency-svc-ngbu1 [2.370127348s]
Jan 12 11:08:31.050: INFO: Created: latency-svc-c2gkt
Jan 12 11:08:31.131: INFO: Created: latency-svc-ie0lf
Jan 12 11:08:31.162: INFO: Created: latency-svc-46x06
Jan 12 11:08:31.237: INFO: Got endpoints: latency-svc-xtgjz [2.510242781s]
Jan 12 11:08:31.354: INFO: Got endpoints: latency-svc-r8xus [2.529659729s]
Jan 12 11:08:31.363: INFO: Created: latency-svc-axc9u
Jan 12 11:08:31.470: INFO: Got endpoints: latency-svc-vte4z [2.485291622s]
Jan 12 11:08:31.650: INFO: Created: latency-svc-p5t67
Jan 12 11:08:31.711: INFO: Created: latency-svc-8ax9w
Jan 12 11:08:31.729: INFO: Got endpoints: latency-svc-lwrmh [2.429929111s]
Jan 12 11:08:31.982: INFO: Got endpoints: latency-svc-ugqrg [2.467089555s]
Jan 12 11:08:32.007: INFO: Created: latency-svc-pdj9r
Jan 12 11:08:32.130: INFO: Got endpoints: latency-svc-wk1yy [2.408081204s]
Jan 12 11:08:32.196: INFO: Got endpoints: latency-svc-jm3v9 [2.310923184s]
Jan 12 11:08:32.262: INFO: Created: latency-svc-u3lje
Jan 12 11:08:32.403: INFO: Created: latency-svc-7vh7c
Jan 12 11:08:32.461: INFO: Created: latency-svc-blgpj
Jan 12 11:08:32.567: INFO: Got endpoints: latency-svc-jasq0 [2.473282991s]
Jan 12 11:08:32.639: INFO: Got endpoints: latency-svc-7nzvg [2.454330345s]
Jan 12 11:08:32.732: INFO: Created: latency-svc-faxnd
Jan 12 11:08:32.778: INFO: Created: latency-svc-k1227
Jan 12 11:08:32.833: INFO: Got endpoints: latency-svc-ai2ue [2.485858752s]
Jan 12 11:08:33.007: INFO: Got endpoints: latency-svc-oc6sf [2.468708533s]
Jan 12 11:08:33.028: INFO: Created: latency-svc-5c82m
Jan 12 11:08:33.146: INFO: Created: latency-svc-5k00e
Jan 12 11:08:33.176: INFO: Got endpoints: latency-svc-d8ech [2.444993533s]
Jan 12 11:08:33.306: INFO: Created: latency-svc-ni9hm
Jan 12 11:08:33.557: INFO: Got endpoints: latency-svc-c2gkt [2.631971071s]
Jan 12 11:08:33.665: INFO: Got endpoints: latency-svc-ie0lf [2.659577579s]
Jan 12 11:08:33.688: INFO: Got endpoints: latency-svc-46x06 [2.551385494s]
Jan 12 11:08:33.851: INFO: Got endpoints: latency-svc-axc9u [2.522281588s]
Jan 12 11:08:33.880: INFO: Created: latency-svc-dys0h
Jan 12 11:08:33.997: INFO: Created: latency-svc-7zwn9
Jan 12 11:08:34.083: INFO: Created: latency-svc-3dhnd
Jan 12 11:08:34.122: INFO: Created: latency-svc-vmt0m
Jan 12 11:08:34.240: INFO: Got endpoints: latency-svc-p5t67 [2.74287448s]
Jan 12 11:08:34.302: INFO: Got endpoints: latency-svc-8ax9w [2.738148272s]
Jan 12 11:08:34.394: INFO: Created: latency-svc-ucrmn
Jan 12 11:08:34.459: INFO: Got endpoints: latency-svc-pdj9r [2.65025072s]
Jan 12 11:08:34.490: INFO: Created: latency-svc-miqn5
Jan 12 11:08:34.615: INFO: Created: latency-svc-uet8s
Jan 12 11:08:34.865: INFO: Got endpoints: latency-svc-u3lje [2.730519101s]
Jan 12 11:08:34.994: INFO: Created: latency-svc-7jqck
Jan 12 11:08:35.150: INFO: Got endpoints: latency-svc-7vh7c [2.841981374s]
Jan 12 11:08:35.253: INFO: Got endpoints: latency-svc-blgpj [2.850603756s]
Jan 12 11:08:35.308: INFO: Created: latency-svc-xk4sv
Jan 12 11:08:35.444: INFO: Created: latency-svc-d37r7
Jan 12 11:08:35.817: INFO: Got endpoints: latency-svc-faxnd [3.199313864s]
Jan 12 11:08:35.909: INFO: Got endpoints: latency-svc-k1227 [3.15325284s]
Jan 12 11:08:36.003: INFO: Created: latency-svc-2jr45
Jan 12 11:08:36.052: INFO: Got endpoints: latency-svc-5c82m [3.097347789s]
Jan 12 11:08:36.144: INFO: Created: latency-svc-kxddz
Jan 12 11:08:36.161: INFO: Got endpoints: latency-svc-dys0h [2.438567186s]
Jan 12 11:08:36.170: INFO: Got endpoints: latency-svc-ni9hm [2.911332652s]
Jan 12 11:08:36.324: INFO: Got endpoints: latency-svc-7zwn9 [2.5220066s]
Jan 12 11:08:36.375: INFO: Got endpoints: latency-svc-vmt0m [2.286078644s]
Jan 12 11:08:36.442: INFO: Created: latency-svc-ob7st
Jan 12 11:08:36.503: INFO: Got endpoints: latency-svc-3dhnd [2.580284825s]
Jan 12 11:08:36.573: INFO: Got endpoints: latency-svc-5k00e [3.449590653s]
Jan 12 11:08:36.658: INFO: Got endpoints: latency-svc-ucrmn [2.342038621s]
Jan 12 11:08:36.719: INFO: Got endpoints: latency-svc-miqn5 [2.277801332s]
Jan 12 11:08:36.732: INFO: Got endpoints: latency-svc-uet8s [2.187234849s]
Jan 12 11:08:37.045: INFO: Got endpoints: latency-svc-7jqck [2.079321821s]
Jan 12 11:08:37.079: INFO: Got endpoints: latency-svc-xk4sv [1.875425077s]
Jan 12 11:08:37.190: INFO: Got endpoints: latency-svc-d37r7 [1.785910466s]
Jan 12 11:08:37.257: INFO: Created: latency-svc-v87ku
Jan 12 11:08:37.267: INFO: Got endpoints: latency-svc-kxddz [1.268850209s]
Jan 12 11:08:37.356: INFO: Got endpoints: latency-svc-2jr45 [1.400509494s]
Jan 12 11:08:37.471: INFO: Created: latency-svc-7fvq9
Jan 12 11:08:37.587: INFO: Got endpoints: latency-svc-ob7st [1.35785825s]
Jan 12 11:08:37.661: INFO: Got endpoints: latency-svc-v87ku [1.039505255s]
Jan 12 11:08:37.814: INFO: Got endpoints: latency-svc-7fvq9 [1.082916088s]
Jan 12 11:08:37.867: INFO: Created: latency-svc-vpuqc
Jan 12 11:08:38.088: INFO: Created: latency-svc-sdwn4
Jan 12 11:08:38.096: INFO: Got endpoints: latency-svc-vpuqc [1.08899261s]
Jan 12 11:08:38.153: INFO: Created: latency-svc-15fq5
Jan 12 11:08:38.229: INFO: Got endpoints: latency-svc-sdwn4 [804.829222ms]
Jan 12 11:08:38.277: INFO: Created: latency-svc-3z0gk
Jan 12 11:08:38.320: INFO: Created: latency-svc-2v3xj
Jan 12 11:08:38.395: INFO: Created: latency-svc-pmpg5
Jan 12 11:08:38.483: INFO: Created: latency-svc-t04bm
Jan 12 11:08:38.511: INFO: Created: latency-svc-8566s
Jan 12 11:08:38.521: INFO: Created: latency-svc-wprbg
Jan 12 11:08:38.550: INFO: Got endpoints: latency-svc-15fq5 [1.001934708s]
Jan 12 11:08:38.633: INFO: Created: latency-svc-tysm5
Jan 12 11:08:38.651: INFO: Got endpoints: latency-svc-3z0gk [486.529622ms]
Jan 12 11:08:38.688: INFO: Created: latency-svc-vf0z4
Jan 12 11:08:38.754: INFO: Created: latency-svc-ssk2q
Jan 12 11:08:38.827: INFO: Created: latency-svc-entj8
Jan 12 11:08:38.847: INFO: Got endpoints: latency-svc-2v3xj [1.137728548s]
Jan 12 11:08:38.861: INFO: Created: latency-svc-a9fp8
Jan 12 11:08:38.894: INFO: Created: latency-svc-vjljj
Jan 12 11:08:38.914: INFO: Created: latency-svc-wisy4
Jan 12 11:08:38.987: INFO: Got endpoints: latency-svc-pmpg5 [1.040378806s]
Jan 12 11:08:39.053: INFO: Got endpoints: latency-svc-8566s [1.031037014s]
Jan 12 11:08:39.074: INFO: Created: latency-svc-0m4c6
Jan 12 11:08:39.083: INFO: Got endpoints: latency-svc-t04bm [1.117083352s]
Jan 12 11:08:39.115: INFO: Created: latency-svc-z9nhv
Jan 12 11:08:39.174: INFO: Got endpoints: latency-svc-wprbg [1.07128459s]
Jan 12 11:08:39.211: INFO: Got endpoints: latency-svc-tysm5 [1.571017274s]
Jan 12 11:08:39.233: INFO: Created: latency-svc-zuzsq
Jan 12 11:08:39.251: INFO: Created: latency-svc-gqynk
Jan 12 11:08:39.282: INFO: Created: latency-svc-rghh8
Jan 12 11:08:39.319: INFO: Created: latency-svc-l5kue
Jan 12 11:08:39.357: INFO: Created: latency-svc-l6h8q
Jan 12 11:08:39.376: INFO: Created: latency-svc-z6b7s
Jan 12 11:08:39.389: INFO: Created: latency-svc-774ma
Jan 12 11:08:39.799: INFO: Got endpoints: latency-svc-gqynk [663.082953ms]
Jan 12 11:08:39.826: INFO: Got endpoints: latency-svc-0m4c6 [1.050883566s]
Jan 12 11:08:39.931: INFO: Got endpoints: latency-svc-entj8 [1.603779492s]
Jan 12 11:08:40.201: INFO: Got endpoints: latency-svc-a9fp8 [1.779894119s]
Jan 12 11:08:40.291: INFO: Got endpoints: latency-svc-774ma [947.445444ms]
Jan 12 11:08:40.430: INFO: Created: latency-svc-q78ki
Jan 12 11:08:40.434: INFO: Got endpoints: latency-svc-l6h8q [1.194983521s]
Jan 12 11:08:40.516: INFO: Created: latency-svc-56qsv
Jan 12 11:08:40.551: INFO: Got endpoints: latency-svc-l5kue [1.356684623s]
Jan 12 11:08:40.644: INFO: Got endpoints: latency-svc-rghh8 [1.495939685s]
Jan 12 11:08:40.663: INFO: Got endpoints: latency-svc-ssk2q [2.41324674s]
Jan 12 11:08:40.704: INFO: Got endpoints: latency-svc-vf0z4 [2.503711708s]
Jan 12 11:08:40.910: INFO: Created: latency-svc-llwqz
Jan 12 11:08:41.038: INFO: Created: latency-svc-wqafa
Jan 12 11:08:41.062: INFO: Got endpoints: latency-svc-vjljj [2.619257563s]
Jan 12 11:08:41.098: INFO: Got endpoints: latency-svc-wisy4 [2.577691209s]
Jan 12 11:08:41.184: INFO: Got endpoints: latency-svc-z6b7s [1.895227061s]
Jan 12 11:08:41.185: INFO: Created: latency-svc-u0pnl
Jan 12 11:08:41.364: INFO: Got endpoints: latency-svc-z9nhv [2.450915814s]
Jan 12 11:08:41.399: INFO: Got endpoints: latency-svc-zuzsq [2.334547053s]
Jan 12 11:08:41.512: INFO: Created: latency-svc-oav5e
Jan 12 11:08:41.646: INFO: Created: latency-svc-56jzf
Jan 12 11:08:41.814: INFO: Created: latency-svc-q3f30
Jan 12 11:08:41.877: INFO: Created: latency-svc-y3r7e
Jan 12 11:08:41.919: INFO: Created: latency-svc-9i46o
Jan 12 11:08:41.978: INFO: Created: latency-svc-l861d
Jan 12 11:08:42.002: INFO: Created: latency-svc-9txln
Jan 12 11:08:42.029: INFO: Created: latency-svc-62kpw
Jan 12 11:08:42.065: INFO: Created: latency-svc-yftlh
Jan 12 11:08:42.128: INFO: Created: latency-svc-bt8yw
Jan 12 11:08:42.573: INFO: Got endpoints: latency-svc-q78ki [2.613400936s]
Jan 12 11:08:42.770: INFO: Got endpoints: latency-svc-56qsv [2.46248318s]
Jan 12 11:08:42.856: INFO: Created: latency-svc-bc7sr
Jan 12 11:08:42.960: INFO: Created: latency-svc-b9iw3
Jan 12 11:08:43.049: INFO: Got endpoints: latency-svc-llwqz [2.449391449s]
Jan 12 11:08:43.179: INFO: Created: latency-svc-zt97d
Jan 12 11:08:43.236: INFO: Got endpoints: latency-svc-wqafa [2.579097805s]
Jan 12 11:08:43.310: INFO: Got endpoints: latency-svc-u0pnl [2.545299208s]
Jan 12 11:08:43.471: INFO: Created: latency-svc-qau38
Jan 12 11:08:43.499: INFO: Created: latency-svc-qyiga
Jan 12 11:08:43.738: INFO: Got endpoints: latency-svc-oav5e [2.828864863s]
Jan 12 11:08:43.914: INFO: Created: latency-svc-f4vlo
Jan 12 11:08:43.948: INFO: Got endpoints: latency-svc-56jzf [2.894246946s]
Jan 12 11:08:44.006: INFO: Got endpoints: latency-svc-q3f30 [2.47367108s]
Jan 12 11:08:44.132: INFO: Got endpoints: latency-svc-y3r7e [2.541110144s]
Jan 12 11:08:44.185: INFO: Created: latency-svc-brkvp
Jan 12 11:08:44.240: INFO: Got endpoints: latency-svc-9i46o [2.92453395s]
Jan 12 11:08:44.315: INFO: Created: latency-svc-wgaa1
Jan 12 11:08:44.536: INFO: Got endpoints: latency-svc-l861d [2.868371974s]
Jan 12 11:08:44.573: INFO: Created: latency-svc-6crle
Jan 12 11:08:44.597: INFO: Got endpoints: latency-svc-9txln [2.91580599s]
Jan 12 11:08:44.723: INFO: Created: latency-svc-7loka
Jan 12 11:08:44.728: INFO: Got endpoints: latency-svc-62kpw [3.025321141s]
Jan 12 11:08:44.758: INFO: Got endpoints: latency-svc-yftlh [2.935010278s]
Jan 12 11:08:44.912: INFO: Got endpoints: latency-svc-bt8yw [3.029417573s]
Jan 12 11:08:44.974: INFO: Created: latency-svc-p5emt
Jan 12 11:08:45.054: INFO: Created: latency-svc-axtkt
Jan 12 11:08:45.213: INFO: Got endpoints: latency-svc-bc7sr [2.415497213s]
Jan 12 11:08:45.247: INFO: Created: latency-svc-tkumr
Jan 12 11:08:45.309: INFO: Created: latency-svc-816c2
Jan 12 11:08:45.381: INFO: Got endpoints: latency-svc-b9iw3 [2.471900958s]
Jan 12 11:08:45.385: INFO: Created: latency-svc-u7b5g
Jan 12 11:08:45.430: INFO: Got endpoints: latency-svc-zt97d [2.289374828s]
Jan 12 11:08:45.476: INFO: Created: latency-svc-8khla
Jan 12 11:08:45.601: INFO: Created: latency-svc-hlrlo
Jan 12 11:08:45.622: INFO: Created: latency-svc-t20wf
Jan 12 11:08:45.780: INFO: Got endpoints: latency-svc-qau38 [2.415084894s]
Jan 12 11:08:45.891: INFO: Got endpoints: latency-svc-qyiga [2.415035115s]
Jan 12 11:08:45.905: INFO: Created: latency-svc-qugd4
Jan 12 11:08:45.959: INFO: Got endpoints: latency-svc-f4vlo [2.119585002s]
Jan 12 11:08:46.151: INFO: Created: latency-svc-11xvh
Jan 12 11:08:46.190: INFO: Created: latency-svc-ofew7
Jan 12 11:08:46.356: INFO: Got endpoints: latency-svc-brkvp [2.278919477s]
Jan 12 11:08:46.432: INFO: Got endpoints: latency-svc-wgaa1 [2.235929992s]
Jan 12 11:08:46.491: INFO: Created: latency-svc-flr1z
Jan 12 11:08:46.597: INFO: Created: latency-svc-o7vgs
Jan 12 11:08:46.636: INFO: Got endpoints: latency-svc-6crle [2.26991153s]
Jan 12 11:08:46.774: INFO: Created: latency-svc-qcol2
Jan 12 11:08:46.893: INFO: Got endpoints: latency-svc-7loka [2.311740102s]
Jan 12 11:08:46.986: INFO: Created: latency-svc-5tg0k
Jan 12 11:08:47.147: INFO: Got endpoints: latency-svc-p5emt [2.400718048s]
Jan 12 11:08:47.289: INFO: Created: latency-svc-iqxdr
Jan 12 11:08:47.337: INFO: Got endpoints: latency-svc-axtkt [2.521385012s]
Jan 12 11:08:47.461: INFO: Created: latency-svc-65ulp
Jan 12 11:08:47.535: INFO: Got endpoints: latency-svc-tkumr [2.426306953s]
Jan 12 11:08:47.728: INFO: Got endpoints: latency-svc-816c2 [2.521005594s]
Jan 12 11:08:47.760: INFO: Created: latency-svc-xe914
Jan 12 11:08:47.814: INFO: Got endpoints: latency-svc-u7b5g [2.568748615s]
Jan 12 11:08:47.884: INFO: Created: latency-svc-b2ate
Jan 12 11:08:47.969: INFO: Created: latency-svc-4ehkb
Jan 12 11:08:47.986: INFO: Got endpoints: latency-svc-8khla [2.596772081s]
Jan 12 11:08:48.131: INFO: Created: latency-svc-hxdtj
Jan 12 11:08:48.279: INFO: Got endpoints: latency-svc-hlrlo [2.748073136s]
Jan 12 11:08:48.416: INFO: Got endpoints: latency-svc-t20wf [2.857679095s]
Jan 12 11:08:48.444: INFO: Created: latency-svc-0gtno
Jan 12 11:08:48.549: INFO: Created: latency-svc-8abqs
Jan 12 11:08:48.670: INFO: Got endpoints: latency-svc-qugd4 [2.827048966s]
Jan 12 11:08:48.802: INFO: Created: latency-svc-vhsld
Jan 12 11:08:49.357: INFO: Got endpoints: latency-svc-11xvh [3.3117329s]
Jan 12 11:08:49.430: INFO: Got endpoints: latency-svc-ofew7 [3.267766904s]
Jan 12 11:08:49.443: INFO: Created: latency-svc-onvxj
Jan 12 11:08:49.544: INFO: Got endpoints: latency-svc-flr1z [3.115979784s]
Jan 12 11:08:49.577: INFO: Created: latency-svc-dq1jn
Jan 12 11:08:49.723: INFO: Created: latency-svc-d70cw
Jan 12 11:08:49.789: INFO: Got endpoints: latency-svc-o7vgs [3.227728867s]
Jan 12 11:08:49.881: INFO: Created: latency-svc-7cdkk
Jan 12 11:08:49.986: INFO: Got endpoints: latency-svc-qcol2 [3.255161347s]
Jan 12 11:08:50.122: INFO: Created: latency-svc-asmt8
Jan 12 11:08:50.193: INFO: Got endpoints: latency-svc-5tg0k [3.239496547s]
Jan 12 11:08:50.347: INFO: Created: latency-svc-4ddqe
Jan 12 11:08:50.493: INFO: Got endpoints: latency-svc-iqxdr [3.240588944s]
Jan 12 11:08:50.584: INFO: Got endpoints: latency-svc-65ulp [3.16678619s]
Jan 12 11:08:50.728: INFO: Created: latency-svc-y9zka
Jan 12 11:08:50.771: INFO: Created: latency-svc-qqxkq
Jan 12 11:08:50.831: INFO: Got endpoints: latency-svc-xe914 [3.110042294s]
Jan 12 11:08:50.980: INFO: Created: latency-svc-qvrj8
Jan 12 11:08:51.015: INFO: Got endpoints: latency-svc-b2ate [3.173223659s]
Jan 12 11:08:51.117: INFO: Got endpoints: latency-svc-4ehkb [3.190193029s]
Jan 12 11:08:51.203: INFO: Created: latency-svc-nmu1w
Jan 12 11:08:51.294: INFO: Got endpoints: latency-svc-hxdtj [3.203836388s]
Jan 12 11:08:51.553: INFO: Got endpoints: latency-svc-0gtno [3.156643548s]
Jan 12 11:08:51.744: INFO: Got endpoints: latency-svc-8abqs [3.240501417s]
Jan 12 11:08:51.941: INFO: Got endpoints: latency-svc-vhsld [3.216892368s]
Jan 12 11:08:52.140: INFO: Got endpoints: latency-svc-onvxj [2.736252927s]
Jan 12 11:08:52.332: INFO: Got endpoints: latency-svc-dq1jn [2.79647021s]
Jan 12 11:08:52.524: INFO: Got endpoints: latency-svc-d70cw [2.880572746s]
Jan 12 11:08:52.721: INFO: Got endpoints: latency-svc-7cdkk [2.880707806s]
Jan 12 11:08:52.921: INFO: Got endpoints: latency-svc-asmt8 [2.829519031s]
Jan 12 11:08:53.106: INFO: Got endpoints: latency-svc-4ddqe [2.789749779s]
Jan 12 11:08:53.404: INFO: Got endpoints: latency-svc-y9zka [2.775087233s]
Jan 12 11:08:53.506: INFO: Got endpoints: latency-svc-qqxkq [2.774324946s]
Jan 12 11:08:53.633: INFO: Got endpoints: latency-svc-qvrj8 [2.696414194s]
Jan 12 11:08:54.047: INFO: Got endpoints: latency-svc-nmu1w [2.919222491s]
STEP: deleting replication controller svc-latency-rc in namespace e2e-tests-svc-latency-ktg0p
Jan 12 11:08:56.407: INFO: Deleting RC svc-latency-rc took: 2.214254016s
Jan 12 11:09:02.481: INFO: Terminating RC svc-latency-rc pods took: 6.07373718s
Jan 12 11:09:02.481: INFO: Latencies: [486.529622ms 663.082953ms 664.547558ms 712.568599ms 724.054088ms 730.383139ms 737.307792ms 760.081349ms 778.374245ms 804.829222ms 814.241379ms 819.376224ms 833.394915ms 834.313428ms 836.96365ms 840.131976ms 883.782287ms 933.444824ms 937.741491ms 947.445444ms 951.456425ms 959.957671ms 1.001934708s 1.018392531s 1.031037014s 1.039505255s 1.040378806s 1.050883566s 1.07128459s 1.082916088s 1.08673198s 1.08899261s 1.092302264s 1.117083352s 1.137728548s 1.151268228s 1.164095068s 1.194770354s 1.194983521s 1.247332103s 1.251627592s 1.268850209s 1.277040897s 1.283427351s 1.284292286s 1.310177667s 1.356684623s 1.35785825s 1.379933981s 1.394018841s 1.400509494s 1.417770797s 1.431787729s 1.442469226s 1.451583865s 1.463938584s 1.472260286s 1.495852746s 1.495939685s 1.503327495s 1.504752194s 1.513079884s 1.550321255s 1.56875046s 1.571017274s 1.603779492s 1.60967707s 1.646126417s 1.701900376s 1.7191197s 1.724473919s 1.741356624s 1.756182825s 1.779894119s 1.785910466s 1.797164251s 1.800077792s 1.8080199s 1.813379346s 1.824261684s 1.875425077s 1.895227061s 1.909006438s 1.977289578s 1.996602105s 1.998242484s 2.020250296s 2.055828074s 2.079321821s 2.086797897s 2.11500764s 2.119585002s 2.167086181s 2.187234849s 2.197933176s 2.235929992s 2.238613943s 2.254053069s 2.258885463s 2.26991153s 2.277801332s 2.278919477s 2.285399353s 2.286078644s 2.289374828s 2.310923184s 2.311740102s 2.334547053s 2.342038621s 2.34491172s 2.370127348s 2.379301916s 2.380929303s 2.400718048s 2.408081204s 2.41324674s 2.415035115s 2.415084894s 2.415497213s 2.426306953s 2.429929111s 2.438567186s 2.444993533s 2.449391449s 2.450915814s 2.454330345s 2.46248318s 2.467089555s 2.468708533s 2.471900958s 2.473282991s 2.47367108s 2.485291622s 2.485858752s 2.503711708s 2.510242781s 2.521005594s 2.521385012s 2.5220066s 2.522281588s 2.529659729s 2.541110144s 2.545299208s 2.551385494s 2.568748615s 2.577691209s 2.579097805s 2.580284825s 2.596772081s 2.613400936s 2.619257563s 2.631971071s 2.65025072s 2.659577579s 2.696414194s 2.730519101s 2.736252927s 2.738148272s 2.74287448s 2.748073136s 2.774324946s 2.775087233s 2.789749779s 2.79647021s 2.827048966s 2.828864863s 2.829519031s 2.841981374s 2.850603756s 2.857679095s 2.868371974s 2.880572746s 2.880707806s 2.894246946s 2.911332652s 2.91580599s 2.919222491s 2.92453395s 2.935010278s 3.025321141s 3.029417573s 3.097347789s 3.110042294s 3.115979784s 3.15325284s 3.156643548s 3.16678619s 3.173223659s 3.190193029s 3.199313864s 3.203836388s 3.216892368s 3.227728867s 3.239496547s 3.240501417s 3.240588944s 3.255161347s 3.267766904s 3.3117329s 3.449590653s]
Jan 12 11:09:02.481: INFO: 50 %ile: 2.277801332s
Jan 12 11:09:02.481: INFO: 90 %ile: 3.029417573s
Jan 12 11:09:02.481: INFO: 99 %ile: 3.3117329s
Jan 12 11:09:02.481: INFO: Total sample count: 200
[AfterEach] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:09:02.481: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:09:02.501: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:09:02.501: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:09:02.501: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:09:02.501: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:09:02.501: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:09:02.501: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-svc-latency-ktg0p" for this suite.
• [SLOW TEST:54.449 seconds]
Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:117
should not be very high [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
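
For context on the three %ile lines above: the suite collects 200 endpoint-availability samples, sorts them, and reads off order statistics. A minimal sketch of that arithmetic, not the e2e framework's actual code, using a simple nearest-rank rule and a handful of sample values taken from the sorted list above:

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile picks the p-th percentile from an already-sorted slice using a
// simple nearest-rank rule; assumes len(sorted) > 0.
func percentile(sorted []time.Duration, p int) time.Duration {
    idx := (len(sorted) * p / 100) - 1
    if idx < 0 {
        idx = 0
    }
    return sorted[idx]
}

func main() {
    // A few of the 200 latency samples reported above.
    samples := []time.Duration{
        486 * time.Millisecond,
        1001 * time.Millisecond,
        2277 * time.Millisecond,
        3029 * time.Millisecond,
        3311 * time.Millisecond,
    }
    sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
    for _, p := range []int{50, 90, 99} {
        fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
    }
}
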
------------------------------
EmptyDir volumes
should support (root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:09:07.535: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ma7y2
Jan 12 11:09:07.539: INFO: Get service account default in ns e2e-tests-emptydir-ma7y2 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:09:09.543: INFO: Service account default in ns e2e-tests-emptydir-ma7y2 with secrets found. (2.008229667s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:09:09.543: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ma7y2
Jan 12 11:09:09.546: INFO: Service account default in ns e2e-tests-emptydir-ma7y2 with secrets found. (2.704964ms)
[It] should support (root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 12 11:09:09.555: INFO: Waiting up to 5m0s for pod pod-f5701f6d-b95f-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:09:09.561: INFO: No Status.Info for container 'test-container' in pod 'pod-f5701f6d-b95f-11e5-ba19-000c29facd78' yet
Jan 12 11:09:09.561: INFO: Waiting for pod pod-f5701f6d-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-ma7y2' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.482168ms elapsed)
Jan 12 11:09:11.566: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-f5701f6d-b95f-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-ma7y2' so far
Jan 12 11:09:11.566: INFO: Waiting for pod pod-f5701f6d-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-ma7y2' status to be 'success or failure'(found phase: "Running", readiness: false) (2.011067628s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-f5701f6d-b95f-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:09:13.633: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:09:13.641: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:09:13.641: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:09:13.641: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:09:13.641: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:09:13.641: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:09:13.641: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ma7y2" for this suite.
• [SLOW TEST:11.200 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:78
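
The "(root,0666,default)" in the spec name maps directly to the perms line in the fetched pod logs: a 0666 file mode renders as -rw-rw-rw-. A tiny illustration of that rendering, separate from the test itself:

package main

import (
    "fmt"
    "io/fs"
)

func main() {
    // 0666 = read+write for owner, group, and other; this is exactly the
    // "-rw-rw-rw-" string the mount-tester container printed above.
    fmt.Println(fs.FileMode(0666)) // -rw-rw-rw-
}
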
------------------------------
Pods
should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:183
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:09:18.746: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ztoh0
Jan 12 11:09:18.759: INFO: Service account default in ns e2e-tests-pods-ztoh0 had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:09:20.762: INFO: Service account default in ns e2e-tests-pods-ztoh0 with secrets found. (2.016111365s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:09:20.762: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ztoh0
Jan 12 11:09:20.764: INFO: Service account default in ns e2e-tests-pods-ztoh0 with secrets found. (2.516909ms)
[It] should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:183
STEP: creating the pod
Jan 12 11:09:20.801: INFO: Waiting up to 5m0s for pod pod-update-fc1feaeb-b95f-11e5-ba19-000c29facd78 status to be running
Jan 12 11:09:20.844: INFO: Waiting for pod pod-update-fc1feaeb-b95f-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ztoh0' status to be 'running'(found phase: "Pending", readiness: false) (43.103131ms elapsed)
Jan 12 11:09:22.848: INFO: Found pod 'pod-update-fc1feaeb-b95f-11e5-ba19-000c29facd78' on node '172.24.114.32'
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:09:22.859: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:09:22.865: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:09:22.865: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:09:22.865: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:09:22.865: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:09:22.865: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:09:22.865: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-ztoh0" for this suite.
• [SLOW TEST:9.166 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should be schedule with cpu and memory limits [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:183
------------------------------
SSS
------------------------------
SchedulerPredicates
validates resource limits of pods that are allowed to run [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:349
[BeforeEach] SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:179
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:09:27.894: INFO: Waiting for terminating namespaces to be deleted...
Jan 12 11:09:27.905: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-ky5pg
Jan 12 11:09:27.912: INFO: Get service account default in ns e2e-tests-sched-pred-ky5pg failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:09:29.928: INFO: Service account default in ns e2e-tests-sched-pred-ky5pg with secrets found. (2.022468354s)
[It] validates resource limits of pods that are allowed to run [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:349
Jan 12 11:09:29.940: INFO: Pod kube-proxy-172.24.114.31 requesting capacity 0 on Node 172.24.114.31
Jan 12 11:09:29.940: INFO: Pod kube-proxy-172.24.114.32 requesting capacity 0 on Node 172.24.114.32
Jan 12 11:09:29.940: INFO: Pod kube-dns-v9-u5dha requesting capacity 310 on Node 172.24.114.31
Jan 12 11:09:29.940: INFO: Pod kube-ui-v4-yftcr requesting capacity 100 on Node 172.24.114.32
Jan 12 11:09:29.940: INFO: Node: 172.24.114.31 has capacity: 690
Jan 12 11:09:29.940: INFO: Node: 172.24.114.32 has capacity: 900
STEP: Starting additional 2 Pods to fully saturate the cluster CPU and trying to start another one
Jan 12 11:09:30.008: INFO: 4 pods running
Jan 12 11:09:35.038: INFO: 6 pods running
Jan 12 11:09:40.049: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-ky5pg
[AfterEach] SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:193
STEP: Destroying namespace for this suite e2e-tests-sched-pred-ky5pg
• [SLOW TEST:27.472 seconds]
SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:390
validates resource limits of pods that are allowed to run [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:349
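
The 690 and 900 capacity figures above are just each node's 1 CPU (1000 millicores, per the node describe output later in this log) minus the CPU requests of the pods already running there. A rough sketch of that bookkeeping; the test's real helpers differ:

package main

import "fmt"

func main() {
    // Per-node CPU requests reported above, in millicores.
    requests := map[string][]int64{
        "172.24.114.31": {0, 310}, // kube-proxy, kube-dns-v9-u5dha
        "172.24.114.32": {0, 100}, // kube-proxy, kube-ui-v4-yftcr
    }
    const nodeMilliCPU = int64(1000) // each node reports cpu: 1

    for node, reqs := range requests {
        free := nodeMilliCPU
        for _, r := range reqs {
            free -= r
        }
        fmt.Printf("Node: %s has capacity: %d\n", node, free)
    }
}
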
------------------------------
Kubectl client Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:570
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:09:55.369: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-kxlj8
Jan 12 11:09:55.374: INFO: Get service account default in ns e2e-tests-kubectl-kxlj8 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:09:57.378: INFO: Service account default in ns e2e-tests-kubectl-kxlj8 with secrets found. (2.008510035s)
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:570
Jan 12 11:09:57.378: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-kxlj8'
Jan 12 11:09:57.622: INFO: replicationcontroller "redis-master" created
Jan 12 11:09:57.622: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-service.json --namespace=e2e-tests-kubectl-kxlj8'
Jan 12 11:09:57.903: INFO: service "redis-master" created
Jan 12 11:09:57.929: INFO: Waiting up to 5m0s for pod redis-master-hqgnb status to be running
Jan 12 11:09:57.940: INFO: Waiting for pod redis-master-hqgnb in namespace 'e2e-tests-kubectl-kxlj8' status to be 'running'(found phase: "Pending", readiness: false) (10.439679ms elapsed)
Jan 12 11:09:59.961: INFO: Found pod 'redis-master-hqgnb' on node '172.24.114.32'
Jan 12 11:09:59.961: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config describe pod redis-master-hqgnb --namespace=e2e-tests-kubectl-kxlj8'
Jan 12 11:10:00.174: INFO: Name: redis-master-hqgnb
Namespace: e2e-tests-kubectl-kxlj8
Image(s): redis
Node: 172.24.114.32/172.24.114.32
Start Time: Tue, 12 Jan 2016 11:13:27 -0800
Labels: app=redis,role=master
Status: Running
Reason:
Message:
IP: 192.168.0.65
Replication Controllers: redis-master (1/1 replicas created)
Containers:
redis-master:
Container ID: docker://b5923425cb997358647bf72c71fc232ea98ca35cd37d58cc70580f05e209f87c
Image: redis
Image ID: docker://0c4334bed751599c3884c227c125de9006e9d1c88d36c6b8c25c075b374b2d2d
State: Running
Started: Tue, 12 Jan 2016 11:13:28 -0800
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-jt0fi:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-jt0fi
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
<invalid> <invalid> 1 {scheduler } Scheduled Successfully assigned redis-master-hqgnb to 172.24.114.32
<invalid> <invalid> 1 {kubelet 172.24.114.32} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
<invalid> <invalid> 1 {kubelet 172.24.114.32} implicitly required container POD Created Created with docker id b27c08013508
<invalid> <invalid> 1 {kubelet 172.24.114.32} implicitly required container POD Started Started with docker id b27c08013508
<invalid> <invalid> 1 {kubelet 172.24.114.32} spec.containers{redis-master} Pulled Container image "redis" already present on machine
<invalid> <invalid> 1 {kubelet 172.24.114.32} spec.containers{redis-master} Created Created with docker id b5923425cb99
<invalid> <invalid> 1 {kubelet 172.24.114.32} spec.containers{redis-master} Started Started with docker id b5923425cb99
Jan 12 11:10:00.174: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-kxlj8'
Jan 12 11:10:00.446: INFO: Name: redis-master
Namespace: e2e-tests-kubectl-kxlj8
Image(s): redis
Selector: app=redis,role=master
Labels: app=redis,role=master
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
<invalid> <invalid> 1 {replication-controller } SuccessfulCreate Created pod: redis-master-hqgnb
Jan 12 11:10:00.446: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-kxlj8'
Jan 12 11:10:00.725: INFO: Name: redis-master
Namespace: e2e-tests-kubectl-kxlj8
Labels: app=redis,role=master
Selector: app=redis,role=master
Type: ClusterIP
IP: 10.100.0.120
Port: <unnamed> 6379/TCP
Endpoints:
Session Affinity: None
No events.
Jan 12 11:10:00.734: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config describe node 172.24.114.31'
Jan 12 11:10:01.109: INFO: Name: 172.24.114.31
Labels: kubernetes.io/hostname=172.24.114.31
CreationTimestamp: Mon, 11 Jan 2016 13:57:05 -0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
──── ────── ───────────────── ────────────────── ────── ───────
OutOfDisk False Tue, 12 Jan 2016 11:13:28 -0800 Mon, 11 Jan 2016 13:57:05 -0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
Ready True Tue, 12 Jan 2016 11:13:28 -0800 Tue, 12 Jan 2016 10:32:21 -0800 KubeletReady kubelet is posting ready status
Addresses: 172.24.114.31,172.24.114.31
Capacity:
cpu: 1
memory: 1021492Ki
pods: 40
System Info:
Machine ID: ab921fae1e3e4ce7b78e4e8efd99addf
System UUID: 1C28A4C8-9A5F-4694-AD1F-8CB51F216E97
Boot ID: 20fb6714-4fa1-4638-9bfe-2ae939a21161
Kernel Version: 4.2.2-coreos-r1
OS Image: CoreOS 835.9.0
Container Runtime Version: docker://1.8.3
Kubelet Version: v1.1.3
Kube-Proxy Version: v1.1.3
ExternalID: 172.24.114.31
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
───────── ──── ──────────── ────────── ─────────────── ─────────────
default		kube-proxy-172.24.114.31	0 (0%)		0 (0%)		0 (0%)		0 (0%)
kube-system	kube-dns-v9-u5dha		310m (31%)	310m (31%)	170Mi (17%)	170Mi (17%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
──────────── ────────── ─────────────── ─────────────
310m (31%)	310m (31%)	170Mi (17%)	170Mi (17%)
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
18h 37m 2 {kubelet 172.24.114.31} NodeReady Node 172.24.114.31 status is now: NodeReady
31m 31m 1 {controllermanager } NodeNotReady Node 172.24.114.31 status is now: NodeNotReady
Jan 12 11:10:01.109: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config describe namespace e2e-tests-kubectl-kxlj8'
Jan 12 11:10:01.393: INFO: Name: e2e-tests-kubectl-kxlj8
Labels: <none>
Status: Active
No resource quota.
No resource limits.
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-kxlj8
• [SLOW TEST:11.052 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl describe
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:571
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:570
------------------------------
SSS
------------------------------
Kubectl client Kubectl expose
should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:640
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:10:06.422: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-rb9n5
Jan 12 11:10:06.426: INFO: Get service account default in ns e2e-tests-kubectl-rb9n5 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:10:08.429: INFO: Service account default in ns e2e-tests-kubectl-rb9n5 with secrets found. (2.007279331s)
[It] should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:640
STEP: creating Redis RC
Jan 12 11:10:08.429: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-rb9n5'
Jan 12 11:10:08.681: INFO: replicationcontroller "redis-master" created
Jan 12 11:10:10.698: INFO: Waiting up to 5m0s for pod redis-master-p7g7e status to be running
Jan 12 11:10:10.702: INFO: Found pod 'redis-master-p7g7e' on node '172.24.114.32'
Jan 12 11:10:10.702: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-p7g7e redis-master --namespace=e2e-tests-kubectl-rb9n5'
Jan 12 11:10:10.921: INFO: 1:C 12 Jan 19:13:38.504 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.6 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
1:M 12 Jan 19:13:38.505 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 12 Jan 19:13:38.505 # Server started, Redis version 3.0.6
1:M 12 Jan 19:13:38.505 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 12 Jan 19:13:38.505 * The server is now ready to accept connections on port 6379
STEP: exposing RC
Jan 12 11:10:10.921: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-rb9n5'
Jan 12 11:10:11.153: INFO: service "rm2" exposed
Jan 12 11:10:11.181: INFO: Service rm2 in namespace e2e-tests-kubectl-rb9n5 found.
Jan 12 11:10:13.185: INFO: No endpoint found, retrying
Jan 12 11:10:15.191: INFO: No endpoint found, retrying
Jan 12 11:10:17.185: INFO: No endpoint found, retrying
STEP: exposing service
Jan 12 11:10:19.188: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-rb9n5'
Jan 12 11:10:19.411: INFO: service "rm3" exposed
Jan 12 11:10:19.482: INFO: Service rm3 in namespace e2e-tests-kubectl-rb9n5 found.
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-rb9n5
• [SLOW TEST:20.099 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl expose
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:641
should create services for rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:640
------------------------------
Docker Containers
should use the image defaults if command and args are blank [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:10:26.521: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-f9qe8
Jan 12 11:10:26.525: INFO: Get service account default in ns e2e-tests-containers-f9qe8 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:10:28.530: INFO: Service account default in ns e2e-tests-containers-f9qe8 with secrets found. (2.008917439s)
[It] should use the image defaults if command and args are blank [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
STEP: Creating a pod to test use defaults
Jan 12 11:10:28.543: INFO: Waiting up to 5m0s for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:10:28.584: INFO: No Status.Info for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' yet
Jan 12 11:10:28.584: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Pending", readiness: false) (41.065314ms elapsed)
Jan 12 11:10:30.588: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-f9qe8' so far
Jan 12 11:10:30.588: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Running", readiness: false) (2.044824368s elapsed)
Jan 12 11:10:32.592: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-f9qe8' so far
Jan 12 11:10:32.592: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Running", readiness: false) (4.048634702s elapsed)
Jan 12 11:10:34.605: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-f9qe8' so far
Jan 12 11:10:34.605: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Running", readiness: false) (6.062248014s elapsed)
Jan 12 11:10:36.625: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-f9qe8' so far
Jan 12 11:10:36.625: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Running", readiness: false) (8.081911267s elapsed)
Jan 12 11:10:38.645: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-24841a42-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-f9qe8' so far
Jan 12 11:10:38.645: INFO: Waiting for pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-f9qe8' status to be 'success or failure'(found phase: "Running", readiness: false) (10.102062164s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod client-containers-24841a42-b960-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep default arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:19.260 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should use the image defaults if command and args are blank [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:53
------------------------------
S
------------------------------
Docker Containers
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:10:45.783: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-bok3m
Jan 12 11:10:45.785: INFO: Get service account default in ns e2e-tests-containers-bok3m failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:10:47.788: INFO: Service account default in ns e2e-tests-containers-bok3m with secrets found. (2.004619761s)
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
STEP: Creating a pod to test override command
Jan 12 11:10:47.797: INFO: Waiting up to 5m0s for pod client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:10:47.804: INFO: No Status.Info for container 'test-container' in pod 'client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78' yet
Jan 12 11:10:47.804: INFO: Waiting for pod client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-bok3m' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.123246ms elapsed)
Jan 12 11:10:49.808: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-bok3m' so far
Jan 12 11:10:49.808: INFO: Waiting for pod client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-bok3m' status to be 'success or failure'(found phase: "Running", readiness: false) (2.011752793s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod client-containers-2ffe97b7-b960-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep-2]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:11.262 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:73
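
Both Docker Containers specs hinge on the same container-spec rule: leaving command/args empty runs the image's own ENTRYPOINT and CMD (the "[/ep default arguments]" log earlier), while setting command replaces the ENTRYPOINT (the "[/ep-2]" log here). A minimal sketch using today's k8s.io/api types, not the 1.1-era ones this run used; the image name is a placeholder:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // With Command unset, the kubelet runs the image's own ENTRYPOINT/CMD.
    defaults := v1.Container{Name: "test-container", Image: "example.invalid/entrypoint-tester"}

    // Setting Command overrides the ENTRYPOINT, which is why the pod logs
    // above show "[/ep-2]" instead of "[/ep default arguments]".
    override := defaults
    override.Command = []string{"/ep-2"}

    fmt.Println(defaults.Command, override.Command)
}
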
------------------------------
Pods
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:512
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:10:57.052: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-5rsoh
Jan 12 11:10:57.059: INFO: Get service account default in ns e2e-tests-pods-5rsoh failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:10:59.093: INFO: Service account default in ns e2e-tests-pods-5rsoh with secrets found. (2.041105473s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:10:59.093: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-5rsoh
Jan 12 11:10:59.097: INFO: Service account default in ns e2e-tests-pods-5rsoh with secrets found. (4.172825ms)
[It] should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:512
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-5rsoh
Jan 12 11:10:59.112: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Jan 12 11:10:59.120: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-5rsoh' status to be '!pending'(found phase: "Pending", readiness: false) (7.592474ms elapsed)
Jan 12 11:11:01.125: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-5rsoh' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-5rsoh
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:13:01.790: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:13:01.802: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:13:01.802: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:13:01.802: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:13:01.802: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:13:01.802: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:13:01.802: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-5rsoh" for this suite.
• [SLOW TEST:129.841 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should *not* be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:512
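
The spec above creates a pod whose liveness probe execs "cat /tmp/health"; because the container keeps that file in place, the probe keeps passing and the restart count observed over the two-minute window stays at 0. A sketch of an equivalent probe using current k8s.io/api types; the 1.1-era test wired this up differently:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // Exec probes pass as long as the command exits 0; "cat /tmp/health"
    // succeeds while the file exists, so the kubelet never restarts the pod.
    probe := v1.Probe{
        ProbeHandler: v1.ProbeHandler{
            Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
        },
        InitialDelaySeconds: 15,
    }
    fmt.Println("liveness exec:", probe.Exec.Command)
}
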
------------------------------
SS
------------------------------
Kubectl client Kubectl run pod
should create a pod from an image when restart is OnFailure [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:849
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:13:06.883: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-fadef
Jan 12 11:13:06.886: INFO: Get service account default in ns e2e-tests-kubectl-fadef failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:13:08.890: INFO: Service account default in ns e2e-tests-kubectl-fadef with secrets found. (2.00759787s)
[BeforeEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:826
[It] should create a pod from an image when restart is OnFailure [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:849
STEP: running the image nginx
Jan 12 11:13:08.890: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config run e2e-test-nginx-pod --restart=OnFailure --image=nginx --namespace=e2e-tests-kubectl-fadef'
Jan 12 11:13:09.092: INFO: pod "e2e-test-nginx-pod" created
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:830
Jan 12 11:13:09.096: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fadef'
Jan 12 11:13:09.357: INFO: pod "e2e-test-nginx-pod" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-fadef
• [SLOW TEST:7.569 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:870
should create a pod from an image when restart is OnFailure [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:849
------------------------------
Kubectl client Update Demo
should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:125
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:13:14.456: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6s67t
Jan 12 11:13:14.465: INFO: Service account default in ns e2e-tests-kubectl-6s67t had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:13:16.469: INFO: Service account default in ns e2e-tests-kubectl-6s67t with secrets found. (2.012386166s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:125
STEP: creating a replication controller
Jan 12 11:13:16.469: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:16.763: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:13:16.763: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:17.045: INFO: update-demo-nautilus-89tuj update-demo-nautilus-v92gw
Jan 12 11:13:17.045: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-89tuj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:17.278: INFO:
Jan 12 11:13:17.278: INFO: update-demo-nautilus-89tuj is created but not running
Jan 12 11:13:22.278: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:22.518: INFO: update-demo-nautilus-89tuj update-demo-nautilus-v92gw
Jan 12 11:13:22.518: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-89tuj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:22.735: INFO: true
Jan 12 11:13:22.736: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-89tuj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:22.953: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:13:22.953: INFO: validating pod update-demo-nautilus-89tuj
Jan 12 11:13:22.961: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:13:22.961: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:13:22.961: INFO: update-demo-nautilus-89tuj is verified up and running
Jan 12 11:13:22.961: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:23.173: INFO: true
Jan 12 11:13:23.173: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:23.406: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:13:23.406: INFO: validating pod update-demo-nautilus-v92gw
Jan 12 11:13:23.414: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:13:23.415: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:13:23.415: INFO: update-demo-nautilus-v92gw is verified up and running
STEP: scaling down the replication controller
Jan 12 11:13:23.415: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:25.669: INFO: replicationcontroller "update-demo-nautilus" scaled
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:13:25.669: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:25.902: INFO: update-demo-nautilus-89tuj update-demo-nautilus-v92gw
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 12 11:13:30.903: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:31.098: INFO: update-demo-nautilus-v92gw
Jan 12 11:13:31.098: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:31.327: INFO: true
Jan 12 11:13:31.327: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:31.510: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:13:31.510: INFO: validating pod update-demo-nautilus-v92gw
Jan 12 11:13:31.513: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:13:31.513: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:13:31.513: INFO: update-demo-nautilus-v92gw is verified up and running
STEP: scaling up the replication controller
Jan 12 11:13:31.513: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:33.796: INFO: replicationcontroller "update-demo-nautilus" scaled
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:13:33.796: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:34.019: INFO: update-demo-nautilus-u1tja update-demo-nautilus-v92gw
Jan 12 11:13:34.020: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-u1tja -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:34.281: INFO: true
Jan 12 11:13:34.281: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-u1tja -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:34.552: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:13:34.552: INFO: validating pod update-demo-nautilus-u1tja
Jan 12 11:13:34.592: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:13:34.592: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:13:34.592: INFO: update-demo-nautilus-u1tja is verified up and running
Jan 12 11:13:34.593: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:34.836: INFO: true
Jan 12 11:13:34.836: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-v92gw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:35.118: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:13:35.118: INFO: validating pod update-demo-nautilus-v92gw
Jan 12 11:13:35.122: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:13:35.122: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:13:35.122: INFO: update-demo-nautilus-v92gw is verified up and running
STEP: using delete to clean up resources
Jan 12 11:13:35.122: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:37.503: INFO: replicationcontroller "update-demo-nautilus" deleted
Jan 12 11:13:37.503: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6s67t'
Jan 12 11:13:37.735: INFO:
Jan 12 11:13:37.735: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6s67t -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:13:37.966: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-6s67t
• [SLOW TEST:28.560 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should scale a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:125
------------------------------
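The spec above exercises kubectl scale against the update-demo-nautilus replication controller, shrinking it to one replica and growing it back to two, then confirms each surviving pod with a go-template query. A minimal sketch of the same sequence with a plain kubectl on the PATH (the run above uses the dockerized build under _output/; everything else is taken verbatim from the log):

# Scale the RC down to one replica and back up to two; --timeout bounds how
# long kubectl waits for the resize to be observed.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-6s67t scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-6s67t scale rc update-demo-nautilus --replicas=2 --timeout=5m

# List only the names of the pods selected by the RC's label, mirroring the
# go-template query the framework runs between resizes.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-6s67t get pods -l name=update-demo \
  -o template --api-version=v1 --template='{{range .items}}{{.metadata.name}} {{end}}'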
SSSSSS
------------------------------
EmptyDir volumes
should support (non-root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:13:43.018: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-r071b
Jan 12 11:13:43.024: INFO: Get service account default in ns e2e-tests-emptydir-r071b failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:13:45.027: INFO: Service account default in ns e2e-tests-emptydir-r071b with secrets found. (2.009312084s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:13:45.027: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-r071b
Jan 12 11:13:45.030: INFO: Service account default in ns e2e-tests-emptydir-r071b with secrets found. (2.783944ms)
[It] should support (non-root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 12 11:13:45.036: INFO: Waiting up to 5m0s for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:13:45.042: INFO: No Status.Info for container 'test-container' in pod 'pod-99a3aa50-b960-11e5-ba19-000c29facd78' yet
Jan 12 11:13:45.042: INFO: Waiting for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r071b' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.159849ms elapsed)
Jan 12 11:13:47.046: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-99a3aa50-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-r071b' so far
Jan 12 11:13:47.046: INFO: Waiting for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r071b' status to be 'success or failure'(found phase: "Running", readiness: false) (2.009833287s elapsed)
Jan 12 11:13:49.051: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-99a3aa50-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-r071b' so far
Jan 12 11:13:49.051: INFO: Waiting for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r071b' status to be 'success or failure'(found phase: "Running", readiness: false) (4.014519147s elapsed)
Jan 12 11:13:51.063: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-99a3aa50-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-r071b' so far
Jan 12 11:13:51.063: INFO: Waiting for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r071b' status to be 'success or failure'(found phase: "Running", readiness: false) (6.026792269s elapsed)
Jan 12 11:13:53.069: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-99a3aa50-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-r071b' so far
Jan 12 11:13:53.069: INFO: Waiting for pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-r071b' status to be 'success or failure'(found phase: "Running", readiness: false) (8.032442533s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:13:55.131: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:13:55.140: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:13:55.141: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:13:55.141: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:13:55.141: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:13:55.141: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:13:55.141: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-r071b" for this suite.
• [SLOW TEST:17.161 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:94
------------------------------
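The emptydir spec above creates a single mount-tester pod, waits for its phase to reach success or failure, and then reads the verification output (mount type, file content, file permissions) from the pod's logs. The same two checks by hand, assuming the pod and its namespace still exist (the namespace is destroyed as soon as the spec finishes):

# Report the pod phase the framework is polling for (Succeeded / Failed).
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-emptydir-r071b get pod pod-99a3aa50-b960-11e5-ba19-000c29facd78 \
  -o template --api-version=v1 --template='{{.status.phase}}'

# Read the mount-tester output that the framework fetched above.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-emptydir-r071b logs pod-99a3aa50-b960-11e5-ba19-000c29facd78 -c test-container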
Kubectl client Guestbook application
should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:14:00.199: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-snpmx
Jan 12 11:14:00.213: INFO: Service account default in ns e2e-tests-kubectl-snpmx had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:14:02.218: INFO: Service account default in ns e2e-tests-kubectl-snpmx with secrets found. (2.018177706s)
[BeforeEach] Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:143
[It] should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
STEP: creating all guestbook components
Jan 12 11:14:02.218: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-snpmx'
Jan 12 11:14:03.399: INFO: replicationcontroller "frontend" created
service "frontend" created
replicationcontroller "redis-master" created
service "redis-master" created
replicationcontroller "redis-slave" created
service "redis-slave" created
STEP: validating guestbook app
Jan 12 11:14:03.399: INFO: Waiting for frontend to serve content.
Jan 12 11:14:19.052: INFO: Trying to add a new entry to the guestbook.
Jan 12 11:14:19.070: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 12 11:15:14.418: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/examples/guestbook --namespace=e2e-tests-kubectl-snpmx'
Jan 12 11:15:21.128: INFO: replicationcontroller "frontend" deleted
service "frontend" deleted
replicationcontroller "redis-master" deleted
service "redis-master" deleted
replicationcontroller "redis-slave" deleted
service "redis-slave" deleted
Jan 12 11:15:21.128: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=frontend --no-headers --namespace=e2e-tests-kubectl-snpmx'
Jan 12 11:15:21.319: INFO:
Jan 12 11:15:21.320: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=frontend --namespace=e2e-tests-kubectl-snpmx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:15:21.541: INFO:
Jan 12 11:15:21.541: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=redis-master --no-headers --namespace=e2e-tests-kubectl-snpmx'
Jan 12 11:15:21.801: INFO:
Jan 12 11:15:21.801: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=redis-master --namespace=e2e-tests-kubectl-snpmx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:15:22.030: INFO:
Jan 12 11:15:22.030: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=redis-slave --no-headers --namespace=e2e-tests-kubectl-snpmx'
Jan 12 11:15:22.265: INFO:
Jan 12 11:15:22.265: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=redis-slave --namespace=e2e-tests-kubectl-snpmx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:15:22.477: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-snpmx
• [SLOW TEST:87.365 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Guestbook application
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:154
should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:153
------------------------------
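The guestbook spec drives the whole application lifecycle through kubectl: create every manifest in examples/guestbook, wait for the frontend to serve and accept an entry, then tear everything down and confirm nothing is left. A condensed sketch using the same commands the log records (kubectl of this era still has the stop verb; later releases fold it into delete):

# Create all guestbook components from the examples directory.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-snpmx create -f /home/gulfstream/repos/kubernetes/examples/guestbook

# Delete them immediately, then confirm no RC or service labelled name=frontend survives.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-snpmx stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/examples/guestbook
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-snpmx get rc,svc -l name=frontend --no-headers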
EmptyDir volumes
should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:15:27.541: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-1tpy4
Jan 12 11:15:27.545: INFO: Get service account default in ns e2e-tests-emptydir-1tpy4 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:15:29.549: INFO: Service account default in ns e2e-tests-emptydir-1tpy4 with secrets found. (2.008215646s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:15:29.549: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-1tpy4
Jan 12 11:15:29.551: INFO: Service account default in ns e2e-tests-emptydir-1tpy4 with secrets found. (2.127349ms)
[It] should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 12 11:15:29.557: INFO: Waiting up to 5m0s for pod pod-d7f053ac-b960-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:15:29.563: INFO: No Status.Info for container 'test-container' in pod 'pod-d7f053ac-b960-11e5-ba19-000c29facd78' yet
Jan 12 11:15:29.563: INFO: Waiting for pod pod-d7f053ac-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-1tpy4' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.186314ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-d7f053ac-b960-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:15:31.613: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:15:31.674: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:15:31.674: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:15:31.674: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:15:31.674: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:15:31.674: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:15:31.674: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-1tpy4" for this suite.
• [SLOW TEST:9.201 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:50
------------------------------
Kubectl client Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:444
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:15:36.742: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-jbwef
Jan 12 11:15:36.748: INFO: Get service account default in ns e2e-tests-kubectl-jbwef failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:15:38.752: INFO: Service account default in ns e2e-tests-kubectl-jbwef with secrets found. (2.009739926s)
[It] should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:444
STEP: validating api versions
Jan 12 11:15:38.752: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config api-versions'
Jan 12 11:15:38.997: INFO: extensions/v1beta1
v1
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-jbwef
• [SLOW TEST:7.288 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl api-versions
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:445
should check if v1 is in available api versions [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:444
------------------------------
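The api-versions spec only asserts that v1 appears in the list printed above. The equivalent one-liner check (grep assumed available on the client machine):

# Exit non-zero if the server does not advertise the v1 API.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config api-versions | grep -x v1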
SS
------------------------------
hostPath
should support r/w [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
[BeforeEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:15:44.032: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-59hk4
Jan 12 11:15:44.037: INFO: Get service account default in ns e2e-tests-hostpath-59hk4 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:15:46.043: INFO: Service account default in ns e2e-tests-hostpath-59hk4 with secrets found. (2.010979707s)
[It] should support r/w [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
STEP: Creating a pod to test hostPath r/w
Jan 12 11:15:46.055: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Jan 12 11:15:46.061: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Jan 12 11:15:46.062: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-59hk4' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.155317ms elapsed)
STEP: Saw pod success
Jan 12 11:15:48.079: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Jan 12 11:15:48.088: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-59hk4' so far
Jan 12 11:15:48.088: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-59hk4' status to be 'success or failure'(found phase: "Running", readiness: false) (9.405025ms elapsed)
Jan 12 11:15:50.094: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-59hk4' so far
Jan 12 11:15:50.094: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-59hk4' status to be 'success or failure'(found phase: "Running", readiness: false) (2.015241689s elapsed)
Jan 12 11:15:52.099: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-59hk4' so far
Jan 12 11:15:52.099: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-59hk4' status to be 'success or failure'(found phase: "Running", readiness: false) (4.019726349s elapsed)
Jan 12 11:15:54.104: INFO: Nil State.Terminated for container 'test-container-2' in pod 'pod-host-path-test' in namespace 'e2e-tests-hostpath-59hk4' so far
Jan 12 11:15:54.105: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-59hk4' status to be 'success or failure'(found phase: "Running", readiness: false) (6.025533285s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-host-path-test container test-container-2: <nil>
STEP: Successfully fetched pod logs:content of file "/test-volume/test-file": mount-tester new file
[AfterEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-59hk4
• [SLOW TEST:17.145 seconds]
hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should support r/w [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:103
------------------------------
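In the hostPath r/w spec, test-container-1 writes /test-volume/test-file into the shared hostPath volume and test-container-2 reads it back; the framework verifies the round trip by fetching the second container's logs, as shown above. The manual equivalent while the spec's namespace still exists:

# Read back what the second container saw in the shared hostPath volume.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-hostpath-59hk4 logs pod-host-path-test -c test-container-2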
SS
------------------------------
Kubectl client Proxy server
should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:16:01.179: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-7xuzj
Jan 12 11:16:01.189: INFO: Service account default in ns e2e-tests-kubectl-7xuzj had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:16:03.192: INFO: Service account default in ns e2e-tests-kubectl-7xuzj with secrets found. (2.013193993s)
[It] should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
STEP: Starting the proxy
Jan 12 11:16:03.193: INFO: Asynchronously running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix163380268/test'
STEP: retrieving proxy /api/ output
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-7xuzj
• [SLOW TEST:7.274 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Proxy server
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:921
should support --unix-socket=/path [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:920
------------------------------
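The proxy spec starts kubectl proxy on a Unix socket and fetches /api/ through it. A sketch of doing the same by hand; the socket path here is arbitrary (the run above uses a temp directory), and curl 7.40 or newer is assumed for --unix-socket support:

# Serve the API over a local Unix socket in the background.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 2   # give the proxy a moment to bind the socket

# Fetch the API root through the socket; the hostname in the URL is ignored.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/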
Pods
should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:460
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:16:08.453: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-3fgiy
Jan 12 11:16:08.459: INFO: Get service account default in ns e2e-tests-pods-3fgiy failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:16:10.463: INFO: Service account default in ns e2e-tests-pods-3fgiy with secrets found. (2.010275744s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:16:10.463: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-3fgiy
Jan 12 11:16:10.465: INFO: Service account default in ns e2e-tests-pods-3fgiy with secrets found. (2.274471ms)
[It] should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:460
Jan 12 11:16:10.472: INFO: Waiting up to 5m0s for pod server-envvars-f05351e6-b960-11e5-ba19-000c29facd78 status to be running
Jan 12 11:16:10.478: INFO: Waiting for pod server-envvars-f05351e6-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'running'(found phase: "Pending", readiness: false) (5.124656ms elapsed)
Jan 12 11:16:12.486: INFO: Found pod 'server-envvars-f05351e6-b960-11e5-ba19-000c29facd78' on node '172.24.114.32'
STEP: Creating a pod to test service env
Jan 12 11:16:12.607: INFO: Waiting up to 5m0s for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:16:12.616: INFO: No Status.Info for container 'env3cont' in pod 'client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78' yet
Jan 12 11:16:12.616: INFO: Waiting for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'success or failure'(found phase: "Pending", readiness: false) (9.206825ms elapsed)
Jan 12 11:16:14.623: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-pods-3fgiy' so far
Jan 12 11:16:14.623: INFO: Waiting for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'success or failure'(found phase: "Running", readiness: false) (2.015823999s elapsed)
Jan 12 11:16:16.628: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-pods-3fgiy' so far
Jan 12 11:16:16.628: INFO: Waiting for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'success or failure'(found phase: "Running", readiness: false) (4.021417912s elapsed)
Jan 12 11:16:18.633: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-pods-3fgiy' so far
Jan 12 11:16:18.633: INFO: Waiting for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'success or failure'(found phase: "Running", readiness: false) (6.02629573s elapsed)
Jan 12 11:16:20.637: INFO: Nil State.Terminated for container 'env3cont' in pod 'client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78' in namespace 'e2e-tests-pods-3fgiy' so far
Jan 12 11:16:20.637: INFO: Waiting for pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-3fgiy' status to be 'success or failure'(found phase: "Running", readiness: false) (8.030590642s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.31 pod client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 container env3cont: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
HOSTNAME=client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78
SHLVL=1
HOME=/root
FOOSERVICE_PORT_8765_TCP=tcp://10.100.0.139:8765
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
FOOSERVICE_SERVICE_HOST=10.100.0.139
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.100.0.1
FOOSERVICE_PORT=tcp://10.100.0.139:8765
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.100.0.139
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:16:22.953: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:16:22.963: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:16:22.963: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:16:22.963: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:16:22.963: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:16:22.963: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:16:22.963: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-3fgiy" for this suite.
• [SLOW TEST:19.545 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should contain environment variables for services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:460
------------------------------
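The pods spec above starts a server pod, exposes it behind a service named fooservice, and then runs a short-lived client pod whose only job is to print its environment; the FOOSERVICE_* variables in the fetched logs are the ones injected for services that already exist when a pod starts. While the spec's namespace is still around, the same dump can be pulled from the client pod's logs:

# The client container just runs env and exits; filter its output down to the
# variables derived from the fooservice service.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-pods-3fgiy \
  logs client-envvars-f18b4e4d-b960-11e5-ba19-000c29facd78 -c env3cont | grep '^FOOSERVICE'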
Kubectl client Update Demo
should do a rolling update of a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:16:27.995: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-d7fbe
Jan 12 11:16:28.001: INFO: Get service account default in ns e2e-tests-kubectl-d7fbe failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:16:30.006: INFO: Service account default in ns e2e-tests-kubectl-d7fbe with secrets found. (2.010234888s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should do a rolling update of a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
STEP: creating the initial replication controller
Jan 12 11:16:30.006: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:30.293: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:16:30.293: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:30.507: INFO: update-demo-nautilus-fb2ah update-demo-nautilus-g6le9
Jan 12 11:16:30.507: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-fb2ah -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:30.755: INFO:
Jan 12 11:16:30.755: INFO: update-demo-nautilus-fb2ah is created but not running
Jan 12 11:16:35.755: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:35.959: INFO: update-demo-nautilus-fb2ah update-demo-nautilus-g6le9
Jan 12 11:16:35.959: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-fb2ah -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:36.175: INFO: true
Jan 12 11:16:36.176: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-fb2ah -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:36.377: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:16:36.377: INFO: validating pod update-demo-nautilus-fb2ah
Jan 12 11:16:36.383: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:16:36.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:16:36.383: INFO: update-demo-nautilus-fb2ah is verified up and running
Jan 12 11:16:36.383: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-g6le9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:36.605: INFO: true
Jan 12 11:16:36.606: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-g6le9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:16:36.844: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:16:36.844: INFO: validating pod update-demo-nautilus-g6le9
Jan 12 11:16:36.850: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:16:36.850: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:16:36.850: INFO: update-demo-nautilus-g6le9 is verified up and running
STEP: rolling-update to new replication controller
Jan 12 11:16:36.850: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config rolling-update update-demo-nautilus --update-period=1s -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/kitten-rc.yaml --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:16.436: INFO: Created update-demo-kitten
Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling update-demo-kitten up to 1
Scaling update-demo-nautilus down to 1
Scaling update-demo-kitten up to 2
Scaling update-demo-nautilus down to 0
Update succeeded. Deleting update-demo-nautilus
replicationcontroller "update-demo-nautilus" rolling updated to "update-demo-kitten"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:17:16.436: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:16.693: INFO: update-demo-kitten-9zryp update-demo-kitten-cia9z update-demo-nautilus-fb2ah
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 12 11:17:21.694: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:21.915: INFO: update-demo-kitten-9zryp update-demo-kitten-cia9z
Jan 12 11:17:21.915: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-kitten-9zryp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:22.098: INFO: true
Jan 12 11:17:22.098: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-kitten-9zryp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:22.332: INFO: gcr.io/google_containers/update-demo:kitten
Jan 12 11:17:22.332: INFO: validating pod update-demo-kitten-9zryp
Jan 12 11:17:22.340: INFO: got data: {
"image": "kitten.jpg"
}
Jan 12 11:17:22.340: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 12 11:17:22.340: INFO: update-demo-kitten-9zryp is verified up and running
Jan 12 11:17:22.340: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-kitten-cia9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:22.547: INFO: true
Jan 12 11:17:22.547: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-kitten-cia9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-d7fbe'
Jan 12 11:17:22.803: INFO: gcr.io/google_containers/update-demo:kitten
Jan 12 11:17:22.804: INFO: validating pod update-demo-kitten-cia9z
Jan 12 11:17:22.813: INFO: got data: {
"image": "kitten.jpg"
}
Jan 12 11:17:22.813: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 12 11:17:22.813: INFO: update-demo-kitten-cia9z is verified up and running
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-d7fbe
• [SLOW TEST:59.847 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should do a rolling update of a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:135
------------------------------
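The rolling-update spec replaces the nautilus RC with the kitten RC one pod at a time (scale kitten up, nautilus down, delete the old RC), then checks that every remaining pod reports the kitten image. The same steps, condensed, using the manifests and flags from the log and a plain kubectl on the PATH:

# Create the initial RC, roll it over to the kitten RC, then print the image
# of every container in the surviving pods.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-d7fbe \
  create -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-d7fbe \
  rolling-update update-demo-nautilus --update-period=1s \
  -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/kitten-rc.yaml
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-kubectl-d7fbe get pods -l name=update-demo \
  -o template --api-version=v1 \
  --template='{{range .items}}{{range .status.containerStatuses}}{{.image}} {{end}}{{end}}'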
Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:480
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:17:27.842: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6b2dg
Jan 12 11:17:27.846: INFO: Get service account default in ns e2e-tests-kubectl-6b2dg failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:17:29.865: INFO: Service account default in ns e2e-tests-kubectl-6b2dg with secrets found. (2.022280432s)
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:480
STEP: validating cluster-info
Jan 12 11:17:29.865: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config cluster-info'
Jan 12 11:17:30.073: INFO: Kubernetes master is running at https://172.24.114.18
KubeDNS is running at https://172.24.114.18/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://172.24.114.18/api/v1/proxy/namespaces/kube-system/services/kube-ui
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-6b2dg
• [SLOW TEST:7.254 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:481
should check if Kubernetes master services is included in cluster-info [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:480
------------------------------
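The cluster-info spec only asserts that the "Kubernetes master is running at ..." line is present; the add-on lines (KubeDNS and KubeUI here) depend on which labelled services the cluster runs. The equivalent check:

# Print cluster-info and fail if the master line is missing.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config cluster-info \
  | grep 'Kubernetes master'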
SS
------------------------------
Probing container
with readiness probe that fails should never be ready and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
[BeforeEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:39
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:17:35.100: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-rrg67
Jan 12 11:17:35.105: INFO: Get service account default in ns e2e-tests-container-probe-rrg67 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:17:37.109: INFO: Service account default in ns e2e-tests-container-probe-rrg67 with secrets found. (2.009044427s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:17:37.109: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-container-probe-rrg67
Jan 12 11:17:37.111: INFO: Service account default in ns e2e-tests-container-probe-rrg67 with secrets found. (2.325065ms)
[It] with readiness probe that fails should never be ready and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
[AfterEach] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:41
Jan 12 11:19:07.129: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:19:07.134: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:19:07.134: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:19:07.134: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:19:07.134: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:19:07.134: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:19:07.134: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-container-probe-rrg67" for this suite.
• [SLOW TEST:97.071 seconds]
Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:101
with readiness probe that fails should never be ready and never restart [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:99
------------------------------
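The probe spec creates a pod whose readiness probe always fails and then watches it for roughly 90 seconds, expecting it never to become Ready and never to restart (the pod itself is not named in the log). The two fields it watches can be read for every pod in the namespace with a go-template like the ones used elsewhere in this run:

# Readiness flag and restart count for every container; the spec expects
# ready=false and restarts=0 throughout the observation window.
kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config \
  --namespace=e2e-tests-container-probe-rrg67 get pods \
  -o template --api-version=v1 \
  --template='{{range .items}}{{range .status.containerStatuses}}{{.name}} ready={{.ready}} restarts={{.restartCount}}{{"\n"}}{{end}}{{end}}'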
S
------------------------------
Networking
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:19:12.169: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-fj5go
Jan 12 11:19:12.174: INFO: Get service account default in ns e2e-tests-nettest-fj5go failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:19:14.177: INFO: Service account default in ns e2e-tests-nettest-fj5go with secrets found. (2.008534375s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:19:14.177: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-fj5go
Jan 12 11:19:14.185: INFO: Service account default in ns e2e-tests-nettest-fj5go with secrets found. (7.975342ms)
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
STEP: testing: /validate
STEP: testing: /healthz
[AfterEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:19:14.305: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:19:14.309: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:19:14.309: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:19:14.310: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:19:14.310: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:19:14.310: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:19:14.310: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-fj5go" for this suite.
• [SLOW TEST:7.168 seconds]
Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should provide unchanging, static URL paths for kubernetes api services [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:103
------------------------------
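The networking spec fetches two fixed apiserver paths, /validate and /healthz, through its API client and expects both to answer. Roughly the same check from a shell, heavily dependent on how this apiserver is secured: -k skips TLS verification, and a client certificate or bearer token may also be required.

# Hit the two fixed paths the spec checks on the master at 172.24.114.18.
curl -k https://172.24.114.18/healthz
curl -k https://172.24.114.18/validate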
hostPath
should give a volume the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
[BeforeEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:53
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:19:19.339: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-hostpath-8d4c7
Jan 12 11:19:19.344: INFO: Get service account default in ns e2e-tests-hostpath-8d4c7 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:19:21.348: INFO: Service account default in ns e2e-tests-hostpath-8d4c7 with secrets found. (2.009159943s)
[It] should give a volume the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
STEP: Creating a pod to test hostPath mode
Jan 12 11:19:21.356: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
Jan 12 11:19:21.361: INFO: No Status.Info for container 'test-container-1' in pod 'pod-host-path-test' yet
Jan 12 11:19:21.361: INFO: Waiting for pod pod-host-path-test in namespace 'e2e-tests-hostpath-8d4c7' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.702856ms elapsed)
STEP: Saw pod success
Jan 12 11:19:23.366: INFO: Waiting up to 5m0s for pod pod-host-path-test status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-host-path-test container test-container-1: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
mode of file "/test-volume": dtrwxrwxrwx
[AfterEach] hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:60
STEP: Destroying namespace for this suite e2e-tests-hostpath-8d4c7
• [SLOW TEST:9.198 seconds]
hostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:104
should give a volume the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/host_path.go:77
------------------------------
P [PENDING]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should get a host IP [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:146
------------------------------
SSSSSSS
------------------------------
Services
should serve multiport endpoints from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:212
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:19:28.533: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-b0ady
Jan 12 11:19:28.538: INFO: Get service account default in ns e2e-tests-services-b0ady failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:19:30.542: INFO: Service account default in ns e2e-tests-services-b0ady with secrets found. (2.007928192s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:19:30.542: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-b0ady
Jan 12 11:19:30.544: INFO: Service account default in ns e2e-tests-services-b0ady with secrets found. (2.273515ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
[It] should serve multiport endpoints from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:212
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-b0ady
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b0ady to expose endpoints map[]
Jan 12 11:19:30.584: INFO: Get endpoints failed (23.66624ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 12 11:19:31.588: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b0ady exposes endpoints map[] (1.027654881s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-b0ady
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b0ady to expose endpoints map[pod1:[100]]
Jan 12 11:19:35.664: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.066282437s elapsed, will retry)
Jan 12 11:19:40.725: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.126846482s elapsed, will retry)
Jan 12 11:19:41.733: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b0ady exposes endpoints map[pod1:[100]] (10.135395185s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-b0ady
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b0ady to expose endpoints map[pod1:[100] pod2:[101]]
Jan 12 11:19:45.834: INFO: Unexpected endpoints: found map[cac0a3f8-b961-11e5-a213-080027dc8cf0:[100]], expected map[pod1:[100] pod2:[101]] (4.090995237s elapsed, will retry)
Jan 12 11:19:47.859: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b0ady exposes endpoints map[pod1:[100] pod2:[101]] (6.115690348s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-b0ady
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b0ady to expose endpoints map[pod2:[101]]
Jan 12 11:19:48.911: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b0ady exposes endpoints map[pod2:[101]] (1.041404354s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-b0ady
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-b0ady to expose endpoints map[]
Jan 12 11:19:50.080: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b0ady exposes endpoints map[] (1.091854986s elapsed)
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:19:50.142: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:19:50.153: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:19:50.153: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:19:50.153: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:19:50.153: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:19:50.153: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:19:50.153: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-services-b0ady" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:26.689 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:862
should serve multiport endpoints from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:212
------------------------------
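For reference, a rough sketch of a multi-port Service like the one this test drives. The name multi-endpoint-test and the target ports 100 and 101 come from the endpoint maps logged above; the selector label, the service ports, and the port names are assumptions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test     # assumed label; pod1 and pod2 would carry it
  ports:
  - name: portname1              # multiple service ports must be named
    port: 80
    targetPort: 100              # pod1's container port, per the endpoints map above
  - name: portname2
    port: 81
    targetPort: 101              # pod2's container port
EOF

Pods matching the selector then appear in the service's endpoints keyed by their target ports, which is the map[pod1:[100] pod2:[101]] progression validated above.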
Pods
should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:539
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:19:55.224: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-543uo
Jan 12 11:19:55.227: INFO: Get service account default in ns e2e-tests-pods-543uo failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:19:57.231: INFO: Service account default in ns e2e-tests-pods-543uo with secrets found. (2.007077715s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:19:57.231: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-543uo
Jan 12 11:19:57.235: INFO: Service account default in ns e2e-tests-pods-543uo with secrets found. (3.452201ms)
[It] should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:539
STEP: Creating pod liveness-http in namespace e2e-tests-pods-543uo
Jan 12 11:19:57.242: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Jan 12 11:19:57.247: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-543uo' status to be '!pending'(found phase: "Pending", readiness: false) (4.912829ms elapsed)
Jan 12 11:19:59.251: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-543uo' status to be '!pending'(found phase: "Pending", readiness: false) (2.009252138s elapsed)
Jan 12 11:20:01.256: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-543uo' status to be '!pending'(found phase: "Pending", readiness: false) (4.013504605s elapsed)
Jan 12 11:20:03.261: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-543uo' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-543uo
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: Restart count of pod e2e-tests-pods-543uo/liveness-http is now 1 (22.08464439s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:20:25.369: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:20:25.385: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:20:25.385: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:20:25.385: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:20:25.385: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:20:25.385: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:20:25.385: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-543uo" for this suite.
• [SLOW TEST:35.202 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:539
------------------------------
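For reference, a minimal sketch of a pod with an HTTP liveness probe on /healthz like the liveness-http pod above. The image, port, and timings are assumptions (the test ships its own server); with this placeholder nothing answers on 8080, so the probe fails and the kubelet restarts the container, which is the restartCount 0 -> 1 transition the test asserts.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: nginx                 # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF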
SSSSS
------------------------------
kube-ui
should check that the kube-ui instance is alive [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:85
[BeforeEach] kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:20:30.429: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-3350z
Jan 12 11:20:30.434: INFO: Get service account default in ns e2e-tests-kube-ui-3350z failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:20:32.441: INFO: Service account default in ns e2e-tests-kube-ui-3350z with secrets found. (2.011774753s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:20:32.441: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kube-ui-3350z
Jan 12 11:20:32.446: INFO: Service account default in ns e2e-tests-kube-ui-3350z with secrets found. (4.996977ms)
[It] should check that the kube-ui instance is alive [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:85
STEP: Checking the kube-ui service exists.
Jan 12 11:20:32.452: INFO: Service kube-ui in namespace kube-system found.
STEP: Checking to make sure the kube-ui pods are running
STEP: Checking to make sure we get a response from the kube-ui.
STEP: Checking that the ApiServer /ui endpoint redirects to a valid server.
[AfterEach] kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:20:39.467: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:20:39.471: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:20:39.471: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:20:39.471: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:20:39.471: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:20:39.471: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:20:39.471: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-kube-ui-3350z" for this suite.
• [SLOW TEST:14.118 seconds]
kube-ui
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:86
should check that the kube-ui instance is alive [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:85
------------------------------
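For reference, a rough way to repeat these checks by hand; the pod label selector is an assumption, and the /ui redirect is the behaviour the final step above verifies.

kubectl get svc kube-ui --namespace=kube-system
kubectl get pods --namespace=kube-system -l k8s-app=kube-ui   # assumed kube-ui pod label
kubectl proxy --port=8001 &
curl -L http://127.0.0.1:8001/ui/                             # follows the apiserver's /ui redirect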
EmptyDir volumes
should support (non-root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:20:44.547: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ewumf
Jan 12 11:20:44.554: INFO: Get service account default in ns e2e-tests-emptydir-ewumf failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:20:46.558: INFO: Service account default in ns e2e-tests-emptydir-ewumf with secrets found. (2.01098028s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:20:46.559: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-ewumf
Jan 12 11:20:46.561: INFO: Service account default in ns e2e-tests-emptydir-ewumf with secrets found. (2.15366ms)
[It] should support (non-root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 12 11:20:46.569: INFO: Waiting up to 5m0s for pod pod-94e41fca-b961-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:20:46.576: INFO: No Status.Info for container 'test-container' in pod 'pod-94e41fca-b961-11e5-ba19-000c29facd78' yet
Jan 12 11:20:46.576: INFO: Waiting for pod pod-94e41fca-b961-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-ewumf' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.038945ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-94e41fca-b961-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:20:48.609: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:20:48.621: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:20:48.622: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:20:48.622: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:20:48.622: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:20:48.622: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:20:48.622: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-ewumf" for this suite.
• [SLOW TEST:9.118 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:62
------------------------------
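For reference, a minimal sketch of a tmpfs-backed emptyDir used by a non-root container, in the spirit of this test. The image, UID, and shell commands are assumptions; the "mount type ... tmpfs" and "-rw-rw-rw-" lines above are what the real mount-tester reports.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # placeholder for the test's mount-tester image
    securityContext:
      runAsUser: 1001            # any non-root UID
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && mount | grep test-volume && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # backs the volume with tmpfs
EOF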
Kubectl client Kubectl label
should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:676
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:20:53.662: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-pprm9
Jan 12 11:20:53.665: INFO: Get service account default in ns e2e-tests-kubectl-pprm9 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:20:55.669: INFO: Service account default in ns e2e-tests-kubectl-pprm9 with secrets found. (2.006218344s)
[BeforeEach] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:652
STEP: creating the pod
Jan 12 11:20:55.669: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:55.938: INFO: pod "nginx" created
Jan 12 11:20:55.938: INFO: Waiting up to 5m0s for the following 1 pods to be running and ready: [nginx]
Jan 12 11:20:55.938: INFO: Waiting up to 5m0s for pod nginx status to be running and ready
Jan 12 11:20:55.944: INFO: Waiting for pod nginx in namespace 'e2e-tests-kubectl-pprm9' status to be 'running and ready'(found phase: "Pending", readiness: false) (5.739836ms elapsed)
Jan 12 11:20:57.948: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:676
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 12 11:20:57.948: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:58.182: INFO: pod "nginx" labeled
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 12 11:20:58.182: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:58.408: INFO: NAME READY STATUS RESTARTS AGE TESTING-LABEL
nginx 1/1 Running 0 <invalid> testing-label-value
STEP: removing the label testing-label of a pod
Jan 12 11:20:58.408: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config label pods nginx testing-label- --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:58.665: INFO: pod "nginx" labeled
STEP: verifying the pod doesn't have the label testing-label
Jan 12 11:20:58.665: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pod nginx -L testing-label --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:58.849: INFO: NAME READY STATUS RESTARTS AGE TESTING-LABEL
nginx 1/1 Running 0 <invalid> <none>
[AfterEach] Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:655
STEP: using delete to clean up resources
Jan 12 11:20:58.849: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/docs/user-guide/pod.yaml --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:59.082: INFO: pod "nginx" deleted
Jan 12 11:20:59.083: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pprm9'
Jan 12 11:20:59.285: INFO:
Jan 12 11:20:59.285: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pprm9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:20:59.523: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-pprm9
• [SLOW TEST:10.889 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl label
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:677
should update the label on a resource [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:676
------------------------------
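Condensed from the invocations above (the --server and --kubeconfig flags dropped), the label round trip the test performs:

kubectl label pods nginx testing-label=testing-label-value --namespace=e2e-tests-kubectl-pprm9
kubectl get pod nginx -L testing-label --namespace=e2e-tests-kubectl-pprm9     # TESTING-LABEL column shows the value
kubectl label pods nginx testing-label- --namespace=e2e-tests-kubectl-pprm9    # trailing dash removes the label
kubectl get pod nginx -L testing-label --namespace=e2e-tests-kubectl-pprm9     # TESTING-LABEL column shows <none>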
Pods
should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:599
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:21:04.550: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-gxj0q
Jan 12 11:21:04.555: INFO: Get service account default in ns e2e-tests-pods-gxj0q failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:21:06.557: INFO: Service account default in ns e2e-tests-pods-gxj0q with secrets found. (2.006563226s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:21:06.557: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-gxj0q
Jan 12 11:21:06.559: INFO: Service account default in ns e2e-tests-pods-gxj0q with secrets found. (2.238393ms)
[It] should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:599
STEP: Creating pod liveness-http in namespace e2e-tests-pods-gxj0q
Jan 12 11:21:06.588: INFO: Waiting up to 5m0s for pod liveness-http status to be !pending
Jan 12 11:21:06.615: INFO: Waiting for pod liveness-http in namespace 'e2e-tests-pods-gxj0q' status to be '!pending'(found phase: "Pending", readiness: false) (27.394722ms elapsed)
Jan 12 11:21:08.621: INFO: Saw pod 'liveness-http' in namespace 'e2e-tests-pods-gxj0q' out of pending state (found '"Running"')
STEP: Started pod liveness-http in namespace e2e-tests-pods-gxj0q
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:23:09.319: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:23:09.330: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:23:09.330: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:23:09.330: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:23:09.330: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:23:09.330: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:23:09.331: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-gxj0q" for this suite.
• [SLOW TEST:129.815 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should *not* be restarted with a /healthz http liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:599
------------------------------
SS
------------------------------
EmptyDir volumes
should support (root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:23:14.367: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-yls2c
Jan 12 11:23:14.370: INFO: Get service account default in ns e2e-tests-emptydir-yls2c failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:23:16.374: INFO: Service account default in ns e2e-tests-emptydir-yls2c with secrets found. (2.007598225s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:23:16.374: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-yls2c
Jan 12 11:23:16.376: INFO: Service account default in ns e2e-tests-emptydir-yls2c with secrets found. (1.946706ms)
[It] should support (root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 12 11:23:16.385: INFO: Waiting up to 5m0s for pod pod-ee302d6f-b961-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:23:16.389: INFO: No Status.Info for container 'test-container' in pod 'pod-ee302d6f-b961-11e5-ba19-000c29facd78' yet
Jan 12 11:23:16.389: INFO: Waiting for pod pod-ee302d6f-b961-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-yls2c' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.561075ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-ee302d6f-b961-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:23:18.427: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:23:18.437: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:23:18.437: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:23:18.437: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:23:18.437: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:23:18.437: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:23:18.437: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-yls2c" for this suite.
• [SLOW TEST:9.098 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:54
------------------------------
SS
------------------------------
SchedulerPredicates
validates that NodeSelector is respected [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:389
[BeforeEach] SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:179
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:23:23.466: INFO: Waiting for terminating namespaces to be deleted...
Jan 12 11:23:23.507: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-sched-pred-aeb0j
Jan 12 11:23:23.522: INFO: Service account default in ns e2e-tests-sched-pred-aeb0j had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:23:25.546: INFO: Service account default in ns e2e-tests-sched-pred-aeb0j with secrets found. (2.038781881s)
[It] validates that NodeSelector is respected [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:389
STEP: Trying to schedule Pod with nonempty NodeSelector.
Jan 12 11:23:25.585: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing all pods in namespace e2e-tests-sched-pred-aeb0j
[AfterEach] SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:193
STEP: Destroying namespace for this suite e2e-tests-sched-pred-aeb0j
• [SLOW TEST:17.253 seconds]
SchedulerPredicates
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:390
validates that NodeSelector is respected [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:389
------------------------------
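For reference, a minimal sketch of the kind of pod this predicate check schedules: a nonempty nodeSelector that matches no node label, so the pod stays Pending. The label key/value and the image are assumptions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    e2e-test: no-such-label     # no node carries this label, so the scheduler never binds the pod
  containers:
  - name: sleeper
    image: nginx                # placeholder image
EOF
kubectl get pod nodeselector-demo    # remains Pending until a node is labeled e2e-test=no-such-label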
SSS
------------------------------
Kubectl client Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:23:40.718: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-8g69n
Jan 12 11:23:40.722: INFO: Get service account default in ns e2e-tests-kubectl-8g69n failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:23:42.727: INFO: Service account default in ns e2e-tests-kubectl-8g69n with secrets found. (2.009045073s)
[BeforeEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:826
[It] should create a pod from an image when restart is Never [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
STEP: running the image nginx
Jan 12 11:23:42.727: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config run e2e-test-nginx-pod --restart=Never --image=nginx --namespace=e2e-tests-kubectl-8g69n'
Jan 12 11:23:43.008: INFO: pod "e2e-test-nginx-pod" created
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:830
Jan 12 11:23:43.013: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8g69n'
Jan 12 11:23:43.268: INFO: pod "e2e-test-nginx-pod" deleted
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-8g69n
• [SLOW TEST:7.583 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl run pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:870
should create a pod from an image when restart is Never [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:868
------------------------------
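Condensed from the invocations above, the run/verify/clean-up cycle; --restart=Never makes kubectl run create a bare pod rather than a replication controller, which is what the test verifies:

kubectl run e2e-test-nginx-pod --restart=Never --image=nginx --namespace=e2e-tests-kubectl-8g69n
kubectl get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8g69n
kubectl stop pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8g69n    # graceful deletion; later kubectl releases replace "stop" with "delete"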
SSSSSS
------------------------------
EmptyDir volumes
should support (root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:23:48.302: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-35hiy
Jan 12 11:23:48.307: INFO: Get service account default in ns e2e-tests-emptydir-35hiy failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:23:50.311: INFO: Service account default in ns e2e-tests-emptydir-35hiy with secrets found. (2.008968778s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:23:50.311: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-35hiy
Jan 12 11:23:50.315: INFO: Service account default in ns e2e-tests-emptydir-35hiy with secrets found. (4.314015ms)
[It] should support (root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 12 11:23:50.346: INFO: Waiting up to 5m0s for pod pod-026ae277-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:23:50.373: INFO: No Status.Info for container 'test-container' in pod 'pod-026ae277-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:23:50.373: INFO: Waiting for pod pod-026ae277-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-35hiy' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.52719ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-026ae277-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:23:52.471: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:23:52.477: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:23:52.477: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:23:52.477: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:23:52.477: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:23:52.477: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:23:52.477: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-35hiy" for this suite.
• [SLOW TEST:9.206 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:74
------------------------------
SSS
------------------------------
Proxy version v1
should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:23:57.506: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-6myti
Jan 12 11:23:57.510: INFO: Get service account default in ns e2e-tests-proxy-6myti failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:23:59.514: INFO: Service account default in ns e2e-tests-proxy-6myti with secrets found. (2.008124622s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:23:59.514: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-6myti
Jan 12 11:23:59.516: INFO: Service account default in ns e2e-tests-proxy-6myti with secrets found. (2.445305ms)
[It] should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
Jan 12 11:23:59.524: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.45299ms)
Jan 12 11:23:59.528: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.889575ms)
Jan 12 11:23:59.532: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.130233ms)
Jan 12 11:23:59.535: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 2.966693ms)
Jan 12 11:23:59.538: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.518168ms)
Jan 12 11:23:59.541: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 2.943754ms)
Jan 12 11:23:59.545: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.403877ms)
Jan 12 11:23:59.706: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 161.04774ms)
Jan 12 11:23:59.905: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.195262ms)
Jan 12 11:24:00.105: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 200.270637ms)
Jan 12 11:24:00.304: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 198.956077ms)
Jan 12 11:24:00.522: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 217.51209ms)
Jan 12 11:24:00.709: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 186.64684ms)
Jan 12 11:24:00.904: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 195.657321ms)
Jan 12 11:24:01.128: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 223.998112ms)
Jan 12 11:24:01.305: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 175.914093ms)
Jan 12 11:24:01.505: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 200.052876ms)
Jan 12 11:24:01.705: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 200.473964ms)
Jan 12 11:24:01.905: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 200.000905ms)
Jan 12 11:24:02.105: INFO: /api/v1/proxy/nodes/172.24.114.31:10250/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.643446ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:24:02.105: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:24:02.306: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:24:02.306: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:24:02.306: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:24:02.306: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:24:02.306: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:24:02.306: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-6myti" for this suite.
• [SLOW TEST:5.405 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node with explicit kubelet port [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:56
------------------------------
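For reference, the same endpoint can be fetched by hand through an API-server proxy session; the node address and explicit kubelet port are the ones exercised above.

kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/proxy/nodes/172.24.114.31:10250/logs/    # returns the kubelet's /logs/ directory listing, as logged above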
S
------------------------------
Pods
should be updated [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:373
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:24:02.909: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ifj9j
Jan 12 11:24:02.915: INFO: Get service account default in ns e2e-tests-pods-ifj9j failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:24:04.918: INFO: Service account default in ns e2e-tests-pods-ifj9j with secrets found. (2.009310958s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:24:04.918: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-ifj9j
Jan 12 11:24:04.921: INFO: Service account default in ns e2e-tests-pods-ifj9j with secrets found. (2.42808ms)
[It] should be updated [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:373
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 12 11:24:04.931: INFO: Waiting up to 5m0s for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 status to be running
Jan 12 11:24:04.937: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (5.925982ms elapsed)
Jan 12 11:24:06.942: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (2.010950387s elapsed)
Jan 12 11:24:08.947: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (4.015874339s elapsed)
Jan 12 11:24:10.951: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (6.020480082s elapsed)
Jan 12 11:24:12.956: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (8.025046076s elapsed)
Jan 12 11:24:14.961: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (10.03016874s elapsed)
Jan 12 11:24:17.008: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (12.077484357s elapsed)
Jan 12 11:24:19.012: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (14.081264s elapsed)
Jan 12 11:24:21.016: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (16.085483616s elapsed)
Jan 12 11:24:23.036: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (18.104770901s elapsed)
Jan 12 11:24:25.042: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (20.110564384s elapsed)
Jan 12 11:24:27.057: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (22.126064461s elapsed)
Jan 12 11:24:29.061: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (24.129978426s elapsed)
Jan 12 11:24:31.065: INFO: Waiting for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-pods-ifj9j' status to be 'running'(found phase: "Pending", readiness: false) (26.133978732s elapsed)
Jan 12 11:24:33.069: INFO: Found pod 'pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78' on node '172.24.114.32'
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 12 11:24:33.579: INFO: Conflicting update to pod, re-get and re-update: pods "pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
STEP: updating the pod
Jan 12 11:24:34.092: INFO: Successfully updated pod
Jan 12 11:24:34.092: INFO: Waiting up to 5m0s for pod pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78 status to be running
Jan 12 11:24:34.097: INFO: Found pod 'pod-update-0b1f7d4a-b962-11e5-ba19-000c29facd78' on node '172.24.114.32'
STEP: verifying the updated pod is in kubernetes
Jan 12 11:24:34.101: INFO: Pod update OK
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:24:34.127: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:24:34.138: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:24:34.138: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:24:34.138: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:24:34.138: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:24:34.139: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:24:34.139: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-ifj9j" for this suite.
• [SLOW TEST:36.301 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should be updated [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:373
------------------------------
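The "Conflicting update" line above is the API server's optimistic-concurrency check: a full update carries the pod's resourceVersion and is rejected if the object changed in between, so the test re-gets and retries. A rough kubectl illustration (command names per a current kubectl; the pod name and label are hypothetical):

kubectl get pod pod-update-demo -o yaml > pod.yaml
# edit pod.yaml, then send the whole object back; this can fail with the same
# "the object has been modified" conflict if the pod changed in the meantime
kubectl replace -f pod.yaml
# a patch sends only the changed fields and skips the resourceVersion check
kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"updated":"true"}}}'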
Docker Containers
should be able to override the image's default arguments (docker cmd) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
[BeforeEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:41
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:24:39.214: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-containers-e9dyc
Jan 12 11:24:39.219: INFO: Get service account default in ns e2e-tests-containers-e9dyc failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:24:41.223: INFO: Service account default in ns e2e-tests-containers-e9dyc with secrets found. (2.008861493s)
[It] should be able to override the image's default arguments (docker cmd) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
STEP: Creating a pod to test override arguments
Jan 12 11:24:41.233: INFO: Waiting up to 5m0s for pod client-containers-20c2b59f-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:24:41.241: INFO: No Status.Info for container 'test-container' in pod 'client-containers-20c2b59f-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:24:41.241: INFO: Waiting for pod client-containers-20c2b59f-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-e9dyc' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.061219ms elapsed)
Jan 12 11:24:43.259: INFO: Nil State.Terminated for container 'test-container' in pod 'client-containers-20c2b59f-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-containers-e9dyc' so far
Jan 12 11:24:43.259: INFO: Waiting for pod client-containers-20c2b59f-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-containers-e9dyc' status to be 'success or failure'(found phase: "Running", readiness: false) (2.025310584s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod client-containers-20c2b59f-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:[/ep override arguments]
[AfterEach] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:47
• [SLOW TEST:11.133 seconds]
Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:84
should be able to override the image's default arguments (docker cmd) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:62
------------------------------
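For reference, a minimal sketch of how a pod spec overrides the image's Docker CMD, which is what the fetched log line [/ep override arguments] demonstrates: the test image's ENTRYPOINT (/ep) echoes whatever arguments it receives. The busybox image and explicit command below stand in for that entrypoint and are assumptions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # placeholder for the test's entrypoint-tester image
    command: ["echo"]                 # stands in for the image's ENTRYPOINT
    args: ["override", "arguments"]   # "args" replaces the image's default CMD, the behaviour asserted above
EOF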
S
------------------------------
Kubectl client Kubectl patch
should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:761
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:24:50.346: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-mofxv
Jan 12 11:24:50.350: INFO: Get service account default in ns e2e-tests-kubectl-mofxv failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:24:52.363: INFO: Service account default in ns e2e-tests-kubectl-mofxv with secrets found. (2.017046439s)
[It] should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:761
STEP: creating Redis RC
Jan 12 11:24:52.363: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-mofxv'
Jan 12 11:24:52.633: INFO: replicationcontroller "redis-master" created
STEP: patching all pods
Jan 12 11:24:54.674: INFO: Waiting up to 5m0s for pod redis-master-uzbz1 status to be running
Jan 12 11:24:54.679: INFO: Found pod 'redis-master-uzbz1' on node '172.24.114.32'
Jan 12 11:24:54.679: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config patch pod redis-master-uzbz1 --namespace=e2e-tests-kubectl-mofxv -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 12 11:24:54.915: INFO: "redis-master-uzbz1" patched
STEP: checking annotations
Jan 12 11:24:54.919: INFO: Waiting up to 5m0s for pod redis-master-uzbz1 status to be running
Jan 12 11:24:54.959: INFO: Found pod 'redis-master-uzbz1' on node '172.24.114.32'
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-mofxv
• [SLOW TEST:9.689 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl patch
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:762
should add annotations for pods in rc [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:761
------------------------------
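Condensed from the invocations above, the patch is a strategic merge of metadata.annotations (quoted here for an interactive shell; the harness passes it as a single argument, and the manifest path is relative to a kubernetes checkout):

kubectl create -f examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-mofxv
kubectl patch pod redis-master-uzbz1 --namespace=e2e-tests-kubectl-mofxv -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-uzbz1 --namespace=e2e-tests-kubectl-mofxv -o yaml | grep -A 3 annotations    # inspect the annotations block for x: "y"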
Networking
should function for intra-pod communication [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:25:00.108: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-musyd
Jan 12 11:25:00.125: INFO: Service account default in ns e2e-tests-nettest-musyd had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:25:02.130: INFO: Service account default in ns e2e-tests-nettest-musyd with secrets found. (2.021382138s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:25:02.130: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-nettest-musyd
Jan 12 11:25:02.134: INFO: Service account default in ns e2e-tests-nettest-musyd with secrets found. (3.940937ms)
[BeforeEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:52
STEP: Executing a successful http request from the external internet
[It] should function for intra-pod communication [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
STEP: Creating a service named "nettest" in namespace "e2e-tests-nettest-musyd"
STEP: Creating a webserver (pending) pod on each node
Jan 12 11:25:02.312: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:25:02.312: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:25:02.312: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:25:02.312: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:25:02.312: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:25:02.312: INFO: Successfully found node 172.24.114.32 readiness to be true
Jan 12 11:25:02.336: INFO: Created pod nettest-weygx on node 172.24.114.31
Jan 12 11:25:02.396: INFO: Created pod nettest-xdll4 on node 172.24.114.32
STEP: Waiting for the webserver pods to transition to Running state
Jan 12 11:25:02.396: INFO: Waiting up to 5m0s for pod nettest-weygx status to be running
Jan 12 11:25:02.410: INFO: Waiting for pod nettest-weygx in namespace 'e2e-tests-nettest-musyd' status to be 'running'(found phase: "Pending", readiness: false) (14.616603ms elapsed)
Jan 12 11:25:04.415: INFO: Found pod 'nettest-weygx' on node '172.24.114.31'
Jan 12 11:25:04.415: INFO: Waiting up to 5m0s for pod nettest-xdll4 status to be running
Jan 12 11:25:04.423: INFO: Found pod 'nettest-xdll4' on node '172.24.114.32'
STEP: Waiting for connectivity to be verified
Jan 12 11:25:06.423: INFO: About to make a proxy status call
Jan 12 11:25:06.427: INFO: Proxy status call returned in 3.864177ms
Jan 12 11:25:06.427: INFO: Attempt 0: test still running
Jan 12 11:25:08.428: INFO: About to make a proxy status call
Jan 12 11:25:08.431: INFO: Proxy status call returned in 3.267358ms
Jan 12 11:25:08.431: INFO: Attempt 1: test still running
Jan 12 11:25:10.432: INFO: About to make a proxy status call
Jan 12 11:25:10.466: INFO: Proxy status call returned in 33.918179ms
Jan 12 11:25:10.466: INFO: Attempt 2: test still running
Jan 12 11:25:12.466: INFO: About to make a proxy status call
Jan 12 11:25:12.470: INFO: Proxy status call returned in 3.545331ms
Jan 12 11:25:12.470: INFO: Attempt 3: test still running
Jan 12 11:25:14.471: INFO: About to make a proxy status call
Jan 12 11:25:14.476: INFO: Proxy status call returned in 4.889299ms
Jan 12 11:25:14.476: INFO: Passed on attempt 4. Cleaning up.
STEP: Cleaning up the webserver pods
STEP: Cleaning up the service
[AfterEach] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:25:14.663: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:25:14.696: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:25:14.696: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:25:14.696: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:25:14.696: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:25:14.696: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:25:14.696: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-nettest-musyd" for this suite.
• [SLOW TEST:19.732 seconds]
Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:249
should function for intra-pod communication [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:247
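
What this spec does, in short: one webserver pod per schedulable node, all behind a single "nettest" service, with the suite polling the webservers through the API-server proxy (the "proxy status call" lines) until they report they could reach each other. A rough, hand-runnable sketch of the scaffolding, with nginx standing in for the suite's own webserver image and the label/selector chosen to mirror the pod names (both are assumptions):

  apiVersion: v1
  kind: Service
  metadata:
    name: nettest
  spec:
    selector:
      name: nettest          # any pod carrying this label becomes a backend
    ports:
    - port: 80
      targetPort: 80
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: nettest-node-a     # the suite creates one such pod per node
    labels:
      name: nettest
  spec:
    containers:
    - name: webserver
      image: nginx           # stand-in; anything serving HTTP on port 80 works
      ports:
      - containerPort: 80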
------------------------------
EmptyDir volumes
should support (non-root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:25:19.768: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-v08ky
Jan 12 11:25:19.772: INFO: Get service account default in ns e2e-tests-emptydir-v08ky failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:25:21.781: INFO: Service account default in ns e2e-tests-emptydir-v08ky with secrets found. (2.013432426s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:25:21.781: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-v08ky
Jan 12 11:25:21.799: INFO: Service account default in ns e2e-tests-emptydir-v08ky with secrets found. (17.515427ms)
[It] should support (non-root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 12 11:25:21.805: INFO: Waiting up to 5m0s for pod pod-38f22521-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:25:21.812: INFO: No Status.Info for container 'test-container' in pod 'pod-38f22521-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:25:21.812: INFO: Waiting for pod pod-38f22521-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-v08ky' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.590227ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-38f22521-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:25:23.847: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:25:23.863: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:25:23.863: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:25:23.863: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:25:23.863: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:25:23.863: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:25:23.863: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-v08ky" for this suite.
• [SLOW TEST:9.127 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:86
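
To poke at the same behaviour by hand: a pod that mounts an emptyDir volume, runs as a non-root UID, writes a file, forces mode 0644 and lists it. busybox stands in for the suite's mount-tester image and UID 1001 is arbitrary:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never        # run once, then read the result with kubectl logs
    volumes:
    - name: test-volume
      emptyDir: {}              # default medium (node disk); medium: Memory gives the tmpfs variants
    containers:
    - name: test-container
      image: busybox            # stand-in for the e2e mount-tester container
      securityContext:
        runAsUser: 1001         # non-root, as in the (non-root,0644,default) case
      command:
      - sh
      - -c
      - touch /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -l /test-volume/test-file
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume

kubectl logs on the completed pod should show -rw-r--r--, matching the "perms of file" line above; the later EmptyDir cases in this run only vary the mode, the user and the medium.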
------------------------------
Kubectl client Update Demo
should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:111
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:25:28.903: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-6em0m
Jan 12 11:25:28.910: INFO: Get service account default in ns e2e-tests-kubectl-6em0m failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:25:30.914: INFO: Service account default in ns e2e-tests-kubectl-6em0m with secrets found. (2.010345478s)
[BeforeEach] Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:104
[It] should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:111
STEP: creating a replication controller
Jan 12 11:25:30.914: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:31.160: INFO: replicationcontroller "update-demo-nautilus" created
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 12 11:25:31.160: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:31.531: INFO: update-demo-nautilus-j0yrk update-demo-nautilus-tikc0
Jan 12 11:25:31.531: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-j0yrk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:31.751: INFO:
Jan 12 11:25:31.751: INFO: update-demo-nautilus-j0yrk is created but not running
Jan 12 11:25:36.751: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} --api-version=v1 -l name=update-demo --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:37.117: INFO: update-demo-nautilus-j0yrk update-demo-nautilus-tikc0
Jan 12 11:25:37.118: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-j0yrk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:37.312: INFO: true
Jan 12 11:25:37.312: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-j0yrk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:37.518: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:25:37.518: INFO: validating pod update-demo-nautilus-j0yrk
Jan 12 11:25:37.524: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:25:37.524: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:25:37.524: INFO: update-demo-nautilus-j0yrk is verified up and running
Jan 12 11:25:37.524: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-tikc0 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:37.745: INFO: true
Jan 12 11:25:37.745: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods update-demo-nautilus-tikc0 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --api-version=v1 --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:37.995: INFO: gcr.io/google_containers/update-demo:nautilus
Jan 12 11:25:37.995: INFO: validating pod update-demo-nautilus-tikc0
Jan 12 11:25:38.005: INFO: got data: {
"image": "nautilus.jpg"
}
Jan 12 11:25:38.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 12 11:25:38.005: INFO: update-demo-nautilus-tikc0 is verified up and running
STEP: using delete to clean up resources
Jan 12 11:25:38.005: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/docs/user-guide/update-demo/nautilus-rc.yaml --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:40.298: INFO: replicationcontroller "update-demo-nautilus" deleted
Jan 12 11:25:40.298: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6em0m'
Jan 12 11:25:40.492: INFO:
Jan 12 11:25:40.492: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6em0m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:25:40.740: INFO:
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-6em0m
• [SLOW TEST:16.877 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Update Demo
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:136
should create and stop a replication controller [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:111
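
nautilus-rc.yaml lives in the Kubernetes source tree (docs/user-guide/update-demo/ in this checkout); its shape is roughly the following, with the names, image and replica count matching the two update-demo-nautilus pods in the log and the remaining details approximate:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-nautilus
  spec:
    replicas: 2                  # the log shows two nautilus pods coming up
    selector:
      name: update-demo
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo
          image: gcr.io/google_containers/update-demo:nautilus

The kubectl stop used for teardown was the 1.x way of scaling an RC to zero and removing it; it has since been folded into kubectl delete.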
------------------------------
SSSS
------------------------------
ServiceAccounts
should mount an API token into pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
[BeforeEach] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:25:45.772: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-07nnz
Jan 12 11:25:45.777: INFO: Get service account default in ns e2e-tests-svcaccounts-07nnz failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:25:47.780: INFO: Service account default in ns e2e-tests-svcaccounts-07nnz with secrets found. (2.008231823s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:25:47.780: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-svcaccounts-07nnz
Jan 12 11:25:47.783: INFO: Service account default in ns e2e-tests-svcaccounts-07nnz with secrets found. (2.14224ms)
[It] should mount an API token into pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 12 11:25:48.314: INFO: Waiting up to 5m0s for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:25:48.318: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-48bea683-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:25:48.318: INFO: Waiting for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-svcaccounts-07nnz' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.95258ms elapsed)
STEP: Saw pod success
Jan 12 11:25:50.327: INFO: Waiting up to 5m0s for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 container token-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/token": eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMtc3ZjYWNjb3VudHMtMDdubnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi0zZ2d2NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOThjNjcwZTgtYjk2Mi0xMWU1LWEyMTMtMDgwMDI3ZGM4Y2YwIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmUyZS10ZXN0cy1zdmNhY2NvdW50cy0wN25uejpkZWZhdWx0In0.RZP2aNIWv9kqhAtW0EwGHF4VjLR0qky8pJ2dNlc1hNW9Kj80XH6dGLCHgP52fXJQWAe_LYnIPTn5Tt5JpCzxNTIuDNz0RYVOW-cnPJG0-rxH9u1qz2awhCtx2z9VeR0IfkLJh7n0YBoCvplxKN1UKqmPHQnEYlUlc5R9hJ42A3pGWbUQkhZvtK19fthrBBkDkiTQb-SUWQ0bmWSjHNfdEUZUWJ2KkVjq5D0Bbwpd-kQX7Y8iSN_2vmE6y4PfuC8aIE7L2Yp7IsCKqY3lHFfGi168U4hjnbqY4WAiXJdPK3KmvsB6_o-P3DdfmrYSpvNgN_AuLmKuYmm9HL0BzO-Acg
STEP: Creating a pod to test consume service account root CA
Jan 12 11:25:50.382: INFO: Waiting up to 5m0s for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:25:50.389: INFO: No Status.Info for container 'token-test' in pod 'pod-service-account-48bea683-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:25:50.389: INFO: Waiting for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-svcaccounts-07nnz' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.193646ms elapsed)
STEP: Saw pod success
Jan 12 11:25:52.393: INFO: Waiting up to 5m0s for pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 status to be success or failure
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-service-account-48bea683-b962-11e5-ba19-000c29facd78 container root-ca-test: <nil>
STEP: Successfully fetched pod logs:content of file "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt": -----BEGIN CERTIFICATE-----
MIIC9zCCAd+gAwIBAgIJAOqvjpgYUEePMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB2t1YmUtY2EwHhcNMTYwMTExMTcyNDQ2WhcNNDMwNTI5MTcyNDQ2WjASMRAw
DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
lXRjOsZaj/6pctPgmKmFPxoIRqJrjlNXTxxveOgv11QjBlQlLLiQ/vhHsJjAtHp/
EkPGCZHD92L1c+IKL38NBRP96x3IAP5Bjo08QMtS4UYeMLsZGjphGIVtnWSur7qh
bJuaFBkKYFA8Pmyb0FtcuwwYxAd36e55Ck7xavj3ubrDAPC2sgaXhkDJoyKvFDiP
+9kNq54ALVz3yHq8nOHC8zA3P9JqNkzi+hN0L/xudBPokMamdno5/vw1BquPbKJ6
RFnrwwODG4s8b/1dO+1k7Sis3eUf3Y3JzV2PYvNXLEi3mLCACSNLE7veddGGoK44
6dq+jjs+LXOHPbG4m1uSVwIDAQABo1AwTjAdBgNVHQ4EFgQUBXRnf9KBuvC03Db+
WHZHxr9j3tAwHwYDVR0jBBgwFoAUBXRnf9KBuvC03Db+WHZHxr9j3tAwDAYDVR0T
BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAJevjUUeYlZ4VgLwaG01dspas+kOB
9jAYeW2yoF1a08Cdi8vhJ+nzpE6eDBrt7BKYm43SfySd69+V8vzN1Jndo2iZxaql
bvpC4vwrK6Z2EFehdaz3oifCqYbHEpRoERwUAdexBPGbRTSE59pRrxsCluiLnNwz
yzzatvloCGbg5WJx1pfw1apYL1vnk1T85qyh3HIHPyxD+njEO/9M+ZHm0PV7RYgp
XcMkWtyx04hbTrh+6KWpUBKSk49ceLAj6O8FtOIuS0I+pTidS+jLr6afnX7ar024
ULkysHcMZCszdz9gypIkMVH4nDt0qfhQ48VcdvHU2jYT+t3XDi8gdetBRg==
-----END CERTIFICATE-----
[AfterEach] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:25:52.460: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:25:52.464: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:25:52.464: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:25:52.464: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:25:52.464: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:25:52.464: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:25:52.464: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-svcaccounts-07nnz" for this suite.
• [SLOW TEST:11.765 seconds]
ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:94
should mount an API token into pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:93
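
The token and ca.crt dumps above are simply the files every pod gets under /var/run/secrets/kubernetes.io/serviceaccount/ for its service account. A hand-rolled version of the check, with busybox standing in for the suite's mount-tester container:

  apiVersion: v1
  kind: Pod
  metadata:
    name: token-test-demo
  spec:
    restartPolicy: Never
    containers:
    - name: token-test
      image: busybox            # stand-in image; no volume is declared, the token mount is automatic
      command:
      - sh
      - -c
      - cat /var/run/secrets/kubernetes.io/serviceaccount/token; echo; cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt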
------------------------------
SS
------------------------------
Services
should provide secure master service [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:70
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:25:57.574: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-8kb18
Jan 12 11:25:57.609: INFO: Get service account default in ns e2e-tests-services-8kb18 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:25:59.612: INFO: Service account default in ns e2e-tests-services-8kb18 with secrets found. (2.037987751s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:25:59.612: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-8kb18
Jan 12 11:25:59.615: INFO: Service account default in ns e2e-tests-services-8kb18 with secrets found. (2.690762ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
[It] should provide secure master service [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:70
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:25:59.620: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:25:59.625: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:25:59.625: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:25:59.625: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:25:59.625: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:25:59.625: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:25:59.625: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-services-8kb18" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:7.111 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:862
should provide secure master service [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:70
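
Nothing is created here; the test only inspects the built-in "kubernetes" service that fronts the API server. Roughly the object it expects to find in the default namespace (clusterIP and target port differ per cluster, and the exact assertions are a guess from the test name):

  apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes
    namespace: default
  spec:
    ports:
    - name: https
      port: 443
      protocol: TCP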
------------------------------
SSS
------------------------------
Downward API volume
should provide labels and annotations files [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
[BeforeEach] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:26:04.658: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-u6rdj
Jan 12 11:26:04.663: INFO: Get service account default in ns e2e-tests-downward-api-u6rdj failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:26:06.668: INFO: Service account default in ns e2e-tests-downward-api-u6rdj with secrets found. (2.009441642s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:26:06.668: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-downward-api-u6rdj
Jan 12 11:26:06.670: INFO: Service account default in ns e2e-tests-downward-api-u6rdj with secrets found. (2.463423ms)
[It] should provide labels and annotations files [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
STEP: Creating a pod to test downward API volume plugin
Jan 12 11:26:06.687: INFO: Waiting up to 5m0s for pod metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:26:06.710: INFO: No Status.Info for container 'client-container' in pod 'metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:26:06.710: INFO: Waiting for pod metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-downward-api-u6rdj' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.469517ms elapsed)
Jan 12 11:26:08.732: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-downward-api-u6rdj' so far
Jan 12 11:26:08.732: INFO: Waiting for pod metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-downward-api-u6rdj' status to be 'success or failure'(found phase: "Running", readiness: false) (2.044333857s elapsed)
Jan 12 11:26:10.755: INFO: Nil State.Terminated for container 'client-container' in pod 'metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-downward-api-u6rdj' so far
Jan 12 11:26:10.755: INFO: Waiting for pod metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-downward-api-u6rdj' status to be 'success or failure'(found phase: "Running", readiness: false) (4.067387087s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78 container client-container: <nil>
STEP: Successfully fetched pod logs:
cluster="rack10"
builder="john-doe"
kubernetes.io/config.seen="2016-01-12T19:28:22.056506883Z"
kubernetes.io/config.source="api"metadata-volume-53b0f6c5-b962-11e5-ba19-000c29facd78
[AfterEach] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:26:12.817: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:26:12.826: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:26:12.827: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:26:12.827: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:26:12.827: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:26:12.827: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:26:12.827: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-downward-api-u6rdj" for this suite.
• [SLOW TEST:13.208 seconds]
Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:95
should provide labels and annotations files [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:94
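
The fetched pod logs are the contents of the downward-API files: the pod's labels (cluster, builder) followed by its annotations. A minimal pod that exposes its own metadata the same way, reusing the label values seen above; busybox stands in for the suite's client container and the mount path is arbitrary:

  apiVersion: v1
  kind: Pod
  metadata:
    name: metadata-volume-demo
    labels:
      cluster: rack10
      builder: john-doe
  spec:
    restartPolicy: Never
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
    containers:
    - name: client-container
      image: busybox            # stand-in image
      command:
      - sh
      - -c
      - cat /etc/podinfo/labels; echo; cat /etc/podinfo/annotations
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo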
------------------------------
S
------------------------------
Services
should serve a basic endpoint from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:129
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:26:17.857: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-2e483
Jan 12 11:26:17.862: INFO: Get service account default in ns e2e-tests-services-2e483 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:26:19.866: INFO: Service account default in ns e2e-tests-services-2e483 with secrets found. (2.00910721s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:26:19.866: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-services-2e483
Jan 12 11:26:19.869: INFO: Service account default in ns e2e-tests-services-2e483 with secrets found. (2.675232ms)
[BeforeEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:54
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
[It] should serve a basic endpoint from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:129
STEP: creating service endpoint-test2 in namespace e2e-tests-services-2e483
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-2e483 to expose endpoints map[]
Jan 12 11:26:19.912: INFO: Get endpoints failed (20.087516ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 12 11:26:20.916: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2e483 exposes endpoints map[] (1.024058842s elapsed)
STEP: creating pod pod1 in namespace e2e-tests-services-2e483
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-2e483 to expose endpoints map[pod1:[80]]
Jan 12 11:26:23.020: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2e483 exposes endpoints map[pod1:[80]] (2.061987038s elapsed)
STEP: creating pod pod2 in namespace e2e-tests-services-2e483
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-2e483 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 12 11:26:27.100: INFO: Unexpected endpoints: found map[ac06eb0e-b962-11e5-a213-080027dc8cf0:[80]], expected map[pod1:[80] pod2:[80]] (4.071228991s elapsed, will retry)
Jan 12 11:26:29.124: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2e483 exposes endpoints map[pod1:[80] pod2:[80]] (6.095541372s elapsed)
STEP: deleting pod pod1 in namespace e2e-tests-services-2e483
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-2e483 to expose endpoints map[pod2:[80]]
Jan 12 11:26:30.215: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2e483 exposes endpoints map[pod2:[80]] (1.061252007s elapsed)
STEP: deleting pod pod2 in namespace e2e-tests-services-2e483
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-2e483 to expose endpoints map[]
Jan 12 11:26:31.249: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2e483 exposes endpoints map[] (1.022970486s elapsed)
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:26:31.301: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:26:31.310: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:26:31.310: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:26:31.310: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:26:31.310: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:26:31.310: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:26:31.310: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-services-2e483" for this suite.
[AfterEach] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:18.536 seconds]
Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:862
should serve a basic endpoint from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:129
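
The endpoints map here is driven purely by label selection: create the service, and pods whose labels match its selector show up in (and drop out of) the map as they come and go. A minimal pair that reproduces the pod1:[80] entry, with nginx standing in for the test's server image and the label key chosen arbitrarily:

  apiVersion: v1
  kind: Service
  metadata:
    name: endpoint-test2
  spec:
    selector:
      app: endpoint-test2      # arbitrary label key for this sketch
    ports:
    - port: 80
      targetPort: 80
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    labels:
      app: endpoint-test2
  spec:
    containers:
    - name: server
      image: nginx             # stand-in; anything listening on port 80 works
      ports:
      - containerPort: 80

kubectl get endpoints endpoint-test2 should then list the pod IP on port 80, and deleting pod1 empties the map again, which is the sequence logged above.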
------------------------------
S
------------------------------
Secrets
should be consumable from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
[BeforeEach] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:26:36.399: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-ubiin
Jan 12 11:26:36.409: INFO: Get service account default in ns e2e-tests-secrets-ubiin failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:26:38.436: INFO: Service account default in ns e2e-tests-secrets-ubiin with secrets found. (2.036456187s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:26:38.436: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-secrets-ubiin
Jan 12 11:26:38.439: INFO: Service account default in ns e2e-tests-secrets-ubiin with secrets found. (3.216848ms)
[It] should be consumable from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
STEP: Creating secret with name secret-test-66a0851e-b962-11e5-ba19-000c29facd78
STEP: Creating a pod to test consume secrets
Jan 12 11:26:38.453: INFO: Waiting up to 5m0s for pod pod-secrets-66a1b536-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:26:38.462: INFO: No Status.Info for container 'secret-test' in pod 'pod-secrets-66a1b536-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:26:38.462: INFO: Waiting for pod pod-secrets-66a1b536-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-secrets-ubiin' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.949546ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-secrets-66a1b536-b962-11e5-ba19-000c29facd78 container secret-test: <nil>
STEP: Successfully fetched pod logs:mode of file "/etc/secret-volume/data-1": -r--r--r--
content of file "/etc/secret-volume/data-1": value-1
STEP: Cleaning up the secret
[AfterEach] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:26:40.540: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:26:40.568: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:26:40.568: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:26:40.568: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:26:40.568: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:26:40.568: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:26:40.568: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-secrets-ubiin" for this suite.
• [SLOW TEST:9.233 seconds]
Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:100
should be consumable from pods [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:99
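
The same check by hand: a secret whose data-1 key holds value-1, mounted read-only into a pod that prints the file's mode and content. busybox stands in for the suite's mount-tester image; names are arbitrary:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-demo
  data:
    data-1: dmFsdWUtMQ==        # base64("value-1")
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-demo
    containers:
    - name: secret-test
      image: busybox            # stand-in image
      command:
      - sh
      - -c
      - ls -l /etc/secret-volume/data-1; cat /etc/secret-volume/data-1
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true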
------------------------------
S
------------------------------
EmptyDir volumes
should support (root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:26:45.629: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-aq8rw
Jan 12 11:26:45.632: INFO: Get service account default in ns e2e-tests-emptydir-aq8rw failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:26:47.651: INFO: Service account default in ns e2e-tests-emptydir-aq8rw with secrets found. (2.021471442s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:26:47.651: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-aq8rw
Jan 12 11:26:47.672: INFO: Service account default in ns e2e-tests-emptydir-aq8rw with secrets found. (21.514276ms)
[It] should support (root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 12 11:26:47.684: INFO: Waiting up to 5m0s for pod pod-6c21687f-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:26:47.691: INFO: No Status.Info for container 'test-container' in pod 'pod-6c21687f-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:26:47.691: INFO: Waiting for pod pod-6c21687f-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-aq8rw' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.436856ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-6c21687f-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:26:49.775: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:26:49.784: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:26:49.784: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:26:49.784: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:26:49.784: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:26:49.784: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:26:49.785: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-aq8rw" for this suite.
• [SLOW TEST:9.189 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0777,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:82
------------------------------
EmptyDir volumes
volume on default medium should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:26:54.815: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-tk1mt
Jan 12 11:26:54.820: INFO: Get service account default in ns e2e-tests-emptydir-tk1mt failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:26:56.825: INFO: Service account default in ns e2e-tests-emptydir-tk1mt with secrets found. (2.010039321s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:26:56.825: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-tk1mt
Jan 12 11:26:56.828: INFO: Service account default in ns e2e-tests-emptydir-tk1mt with secrets found. (2.646321ms)
[It] volume on default medium should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 12 11:26:56.836: INFO: Waiting up to 5m0s for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:26:56.841: INFO: No Status.Info for container 'test-container' in pod 'pod-71965fd4-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:26:56.841: INFO: Waiting for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-tk1mt' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.445135ms elapsed)
Jan 12 11:26:58.846: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-71965fd4-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-tk1mt' so far
Jan 12 11:26:58.846: INFO: Waiting for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-tk1mt' status to be 'success or failure'(found phase: "Running", readiness: false) (2.009866725s elapsed)
Jan 12 11:27:00.869: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-71965fd4-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-tk1mt' so far
Jan 12 11:27:00.869: INFO: Waiting for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-tk1mt' status to be 'success or failure'(found phase: "Running", readiness: false) (4.032423347s elapsed)
Jan 12 11:27:02.873: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-71965fd4-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-tk1mt' so far
Jan 12 11:27:02.873: INFO: Waiting for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-tk1mt' status to be 'success or failure'(found phase: "Running", readiness: false) (6.036797393s elapsed)
Jan 12 11:27:04.879: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-71965fd4-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-tk1mt' so far
Jan 12 11:27:04.879: INFO: Waiting for pod pod-71965fd4-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-tk1mt' status to be 'success or failure'(found phase: "Running", readiness: false) (8.042757865s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-71965fd4-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
perms of file "/test-volume": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:06.913: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:06.925: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:06.925: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:06.925: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:06.926: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:06.926: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:06.926: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-tk1mt" for this suite.
• [SLOW TEST:17.148 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
volume on default medium should have the correct mode [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:70
------------------------------
Proxy version v1
should proxy logs on node [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:27:11.963: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-cnu14
Jan 12 11:27:11.970: INFO: Get service account default in ns e2e-tests-proxy-cnu14 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:13.973: INFO: Service account default in ns e2e-tests-proxy-cnu14 with secrets found. (2.010571154s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:27:13.974: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-cnu14
Jan 12 11:27:13.976: INFO: Service account default in ns e2e-tests-proxy-cnu14 with secrets found. (1.983092ms)
[It] should proxy logs on node [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
Jan 12 11:27:13.982: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.454814ms)
Jan 12 11:27:13.986: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 3.19857ms)
Jan 12 11:27:13.989: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 2.917323ms)
Jan 12 11:27:13.992: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 2.942437ms)
Jan 12 11:27:13.995: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 2.953951ms)
Jan 12 11:27:14.047: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 52.239165ms)
Jan 12 11:27:14.059: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 11.426157ms)
Jan 12 11:27:14.173: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 114.140606ms)
Jan 12 11:27:14.362: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 188.892327ms)
Jan 12 11:27:14.561: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.16225ms)
Jan 12 11:27:14.763: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 201.278961ms)
Jan 12 11:27:14.962: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 198.792862ms)
Jan 12 11:27:15.202: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 240.422493ms)
Jan 12 11:27:15.361: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 159.202421ms)
Jan 12 11:27:15.561: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.40013ms)
Jan 12 11:27:15.789: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 228.20774ms)
Jan 12 11:27:15.962: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 172.11502ms)
Jan 12 11:27:16.161: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.0426ms)
Jan 12 11:27:16.361: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 199.787803ms)
Jan 12 11:27:16.562: INFO: /api/v1/proxy/nodes/172.24.114.31/logs/: <pre>
<a href="lastlog">lastlog</a>
<a href="journal/">journal/</a>
<a href="calico/">calico/</a>... (200; 200.663185ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:16.562: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:16.764: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:16.764: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:16.764: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:16.764: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:16.764: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:16.764: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-cnu14" for this suite.
• [SLOW TEST:5.404 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy logs on node [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:58
------------------------------
EmptyDir volumes
should support (non-root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:27:17.372: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-wrbpv
Jan 12 11:27:17.382: INFO: Get service account default in ns e2e-tests-emptydir-wrbpv failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:19.385: INFO: Service account default in ns e2e-tests-emptydir-wrbpv with secrets found. (2.013570523s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:27:19.385: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-wrbpv
Jan 12 11:27:19.388: INFO: Service account default in ns e2e-tests-emptydir-wrbpv with secrets found. (2.495864ms)
[It] should support (non-root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 12 11:27:19.395: INFO: Waiting up to 5m0s for pod pod-7f08d78b-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:27:19.400: INFO: No Status.Info for container 'test-container' in pod 'pod-7f08d78b-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:27:19.400: INFO: Waiting for pod pod-7f08d78b-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-wrbpv' status to be 'success or failure'(found phase: "Pending", readiness: false) (5.119005ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-7f08d78b-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": 61267
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-rw-rw-
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:21.437: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:21.451: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:21.451: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:21.451: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:21.451: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:21.451: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:21.452: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-wrbpv" for this suite.
• [SLOW TEST:9.126 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0666,default) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:90
------------------------------
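Note: the EmptyDir (non-root,0666,default) spec above comes down to mounting an emptyDir volume as a non-root user, writing a file, setting mode 0666, and reporting the mount type, file content, and permissions, which is what the fetched pod log shows. A rough stand-in using a generic busybox container instead of the suite's own mount-tester image (pod name, uid, and commands are illustrative assumptions):
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}
  containers:
  - name: test-container
    image: busybox
    securityContext:
      runAsUser: 1001             # non-root uid; the specific value is an assumption
    command: ["/bin/sh", "-c"]
    args: ["echo 'mount-tester new file' > /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
EOF
kubectl --kubeconfig=/home/gulfstream/.kube/config logs emptydir-0666-demo
The later (non-root,0644,tmpfs), (non-root,0777,tmpfs), and (root,0644,tmpfs) specs vary only the uid, the chmod mode, and the volume medium; the tmpfs variants set emptyDir.medium to "Memory" in the same manifest.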
S
------------------------------
EmptyDir volumes
should support (non-root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:27:26.497: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4fv6q
Jan 12 11:27:26.501: INFO: Get service account default in ns e2e-tests-emptydir-4fv6q failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:28.505: INFO: Service account default in ns e2e-tests-emptydir-4fv6q with secrets found. (2.008418091s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:27:28.505: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-4fv6q
Jan 12 11:27:28.509: INFO: Service account default in ns e2e-tests-emptydir-4fv6q with secrets found. (3.342569ms)
[It] should support (non-root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 12 11:27:28.517: INFO: Waiting up to 5m0s for pod pod-847884db-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:27:28.524: INFO: No Status.Info for container 'test-container' in pod 'pod-847884db-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:27:28.524: INFO: Waiting for pod pod-847884db-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-4fv6q' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.306485ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-847884db-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:30.559: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:30.571: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:30.571: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:30.571: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:30.572: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:30.572: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:30.572: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-4fv6q" for this suite.
• [SLOW TEST:9.178 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:58
------------------------------
Proxy version v1
should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:27:35.676: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-y5pgu
Jan 12 11:27:35.681: INFO: Get service account default in ns e2e-tests-proxy-y5pgu failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:37.702: INFO: Service account default in ns e2e-tests-proxy-y5pgu with secrets found. (2.025959427s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:27:37.702: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-y5pgu
Jan 12 11:27:37.720: INFO: Service account default in ns e2e-tests-proxy-y5pgu with secrets found. (18.173942ms)
[It] should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
Jan 12 11:27:37.733: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 7.98906ms)
Jan 12 11:27:37.738: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 4.592079ms)
Jan 12 11:27:37.742: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 3.919727ms)
Jan 12 11:27:37.746: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 4.201271ms)
Jan 12 11:27:37.751: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 4.847144ms)
Jan 12 11:27:37.755: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 4.079297ms)
Jan 12 11:27:37.761: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 5.262874ms)
Jan 12 11:27:37.871: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 110.310615ms)
Jan 12 11:27:38.072: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.233155ms)
Jan 12 11:27:38.315: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 242.400738ms)
Jan 12 11:27:38.473: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 157.899217ms)
Jan 12 11:27:38.672: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 198.987876ms)
Jan 12 11:27:38.873: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 201.551863ms)
Jan 12 11:27:39.072: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 198.448038ms)
Jan 12 11:27:39.279: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 206.829297ms)
Jan 12 11:27:39.473: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 194.190381ms)
Jan 12 11:27:39.681: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 207.567264ms)
Jan 12 11:27:39.873: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 191.706328ms)
Jan 12 11:27:40.073: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 200.214751ms)
Jan 12 11:27:40.272: INFO: /api/v1/proxy/nodes/172.24.114.31:4194/containers/:
<html>
<head>
<title>cAdvisor - /</title>
<link rel="stylesheet" href="../static/... (200; 198.488599ms)
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:40.272: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:40.485: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:40.485: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:40.485: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:40.485: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:40.485: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:40.485: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-y5pgu" for this suite.
• [SLOW TEST:5.405 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy to cadvisor [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:60
------------------------------
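Note: the cadvisor spec above repeatedly GETs /api/v1/proxy/nodes/172.24.114.31:4194/containers/ through the apiserver and only needs HTTP 200 responses, which is what the (200; ...) suffixes record. The same request can be issued by hand, assuming a local kubectl proxy on its default port 8001:
kubectl --kubeconfig=/home/gulfstream/.kube/config proxy --port=8001 &
# cAdvisor is served on the node at port 4194; the apiserver proxies the request to it.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8001/api/v1/proxy/nodes/172.24.114.31:4194/containers/
Anything other than 200 here would have failed the spec.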
Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127
[BeforeEach] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:27:41.081: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-kf829
Jan 12 11:27:41.085: INFO: Get service account default in ns e2e-tests-events-kf829 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:43.088: INFO: Service account default in ns e2e-tests-events-kf829 with secrets found. (2.007001109s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:27:43.088: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-events-kf829
Jan 12 11:27:43.090: INFO: Service account default in ns e2e-tests-events-kf829 with secrets found. (2.067085ms)
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 12 11:27:43.096: INFO: Waiting up to 5m0s for pod send-events-8d2982bf-b962-11e5-ba19-000c29facd78 status to be running
Jan 12 11:27:43.102: INFO: Waiting for pod send-events-8d2982bf-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-events-kf829' status to be 'running'(found phase: "Pending", readiness: false) (6.134243ms elapsed)
Jan 12 11:27:45.108: INFO: Found pod 'send-events-8d2982bf-b962-11e5-ba19-000c29facd78' on node '172.24.114.32'
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
&{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:send-events-8d2982bf-b962-11e5-ba19-000c29facd78 GenerateName: Namespace:e2e-tests-events-kf829 SelfLink:/api/v1/namespaces/e2e-tests-events-kf829/pods/send-events-8d2982bf-b962-11e5-ba19-000c29facd78 UID:d930ad3e-b962-11e5-a213-080027dc8cf0 ResourceVersion:22455 Generation:0 CreationTimestamp:2016-01-12 11:29:50 -0800 PST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[time:90972512 name:foo] Annotations:map[]} Spec:{Volumes:[{Name:default-token-a4hpi VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc208b97e00 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil>}}] Containers:[{Name:p Image:gcr.io/google_containers/serve_hostname:1.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-a4hpi ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil> Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc208b97e30 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:172.24.114.32 HostNetwork:false HostPID:false HostIPC:false ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.24.114.32 PodIP:192.168.0.65 StartTime:2016-01-12 11:29:50 -0800 PST ContainerStatuses:[{Name:p State:{Waiting:<nil> Running:0xc208ba6c80 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/serve_hostname:1.1 ImageID:docker://00619279d4083019321e4865829a65a550a23c677d76cbb44274ade0d92ca7a9 ContainerID:docker://43e81f3c0ec954ea3b82760ee2b41ae5115fa047e9f72697d9d5ae2d2e99110f}]}}
STEP: checking for scheduler event about the pod
Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:27:49.195: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:27:49.203: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:27:49.203: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:27:49.203: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:27:49.203: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:27:49.203: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:27:49.203: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-events-kf829" for this suite.
• [SLOW TEST:13.154 seconds]
Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127
------------------------------
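Note: the Events spec above schedules a pod and then asserts that both a scheduler event and a kubelet event exist for it, which is what the "Saw scheduler event" / "Saw kubelet event" lines confirm. The same events can be inspected from the CLI; the pod name below is the one from this run and the grep is only an illustration:
kubectl --kubeconfig=/home/gulfstream/.kube/config get events --namespace=e2e-tests-events-kf829 | grep send-events-8d2982bf-b962-11e5-ba19-000c29facd78
Running kubectl describe pod with the same name and namespace shows those events inline as well.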
Kubectl client Kubectl version
should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:773
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:27:54.261: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-zt291
Jan 12 11:27:54.307: INFO: Get service account default in ns e2e-tests-kubectl-zt291 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:27:56.334: INFO: Service account default in ns e2e-tests-kubectl-zt291 with secrets found. (2.073141655s)
[It] should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:773
Jan 12 11:27:56.334: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config version'
Jan 12 11:27:56.580: INFO: Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.5-beta.0.2+f175451d8bbe03", GitCommit:"f175451d8bbe0319805805241010e68752148314", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-zt291
• [SLOW TEST:7.412 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl version
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:774
should check is all data is printed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:773
------------------------------
ReplicationController
should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
[BeforeEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:28:01.644: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-ohv2r
Jan 12 11:28:01.648: INFO: Get service account default in ns e2e-tests-replication-controller-ohv2r failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:28:03.652: INFO: Service account default in ns e2e-tests-replication-controller-ohv2r with secrets found. (2.008248045s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:28:03.652: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-replication-controller-ohv2r
Jan 12 11:28:03.654: INFO: Service account default in ns e2e-tests-replication-controller-ohv2r with secrets found. (2.112917ms)
[It] should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
STEP: Creating replication controller my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78
Jan 12 11:28:03.670: INFO: Pod name my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78: Found 0 pods out of 2
Jan 12 11:28:08.676: INFO: Pod name my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78: Found 2 pods out of 2
STEP: Ensuring each pod is running
Jan 12 11:28:08.676: INFO: Waiting up to 5m0s for pod my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-h3v5n status to be running
Jan 12 11:28:08.679: INFO: Found pod 'my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-h3v5n' on node '172.24.114.31'
Jan 12 11:28:08.680: INFO: Waiting up to 5m0s for pod my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-jh0j1 status to be running
Jan 12 11:28:08.683: INFO: Found pod 'my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-jh0j1' on node '172.24.114.32'
STEP: Trying to dial each unique pod
Jan 12 11:28:13.737: INFO: Controller my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78: Got expected result from replica 1 [my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-h3v5n]: "my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-h3v5n", 1 of 2 required successes so far
Jan 12 11:28:13.746: INFO: Controller my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78: Got expected result from replica 2 [my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-jh0j1]: "my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78-jh0j1", 2 of 2 required successes so far
STEP: deleting replication controller my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78 in namespace e2e-tests-replication-controller-ohv2r
Jan 12 11:28:15.896: INFO: Deleting RC my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78 took: 2.146219488s
Jan 12 11:28:21.907: INFO: Terminating RC my-hostname-basic-996b4975-b962-11e5-ba19-000c29facd78 pods took: 6.011438203s
[AfterEach] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:28:21.907: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:28:21.911: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:28:21.912: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:28:21.912: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:28:21.912: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:28:21.912: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:28:21.912: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-replication-controller-ohv2r" for this suite.
• [SLOW TEST:25.300 seconds]
ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:46
should serve a basic image on each replica with a public image [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
------------------------------
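Note: the ReplicationController spec above creates an RC with two replicas of a hostname-serving image, waits for both pods, then dials each replica and expects the pod's own name back (the two "Got expected result from replica" lines). A hand-rolled equivalent, assuming the serve_hostname image seen elsewhere in this run and its conventional port 9376 (name and port are illustrative assumptions):
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo    # illustrative name
spec:
  replicas: 2
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: gcr.io/google_containers/serve_hostname:1.1
        ports:
        - containerPort: 9376     # serve_hostname's default port; an assumption here
EOF
kubectl --kubeconfig=/home/gulfstream/.kube/config get pods -l name=my-hostname-basic-demo -o wide
Each pod, queried on that port, should answer with its own name, which is exactly what the suite checks per replica.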
SSS
------------------------------
EmptyDir volumes
should support (non-root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:28:26.944: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-120q7
Jan 12 11:28:26.949: INFO: Get service account default in ns e2e-tests-emptydir-120q7 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:28:28.974: INFO: Service account default in ns e2e-tests-emptydir-120q7 with secrets found. (2.030504486s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:28:28.975: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-120q7
Jan 12 11:28:29.022: INFO: Service account default in ns e2e-tests-emptydir-120q7 with secrets found. (47.899689ms)
[It] should support (non-root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 12 11:28:29.046: INFO: Waiting up to 5m0s for pod pod-a88a3290-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:28:29.144: INFO: No Status.Info for container 'test-container' in pod 'pod-a88a3290-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:28:29.144: INFO: Waiting for pod pod-a88a3290-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-120q7' status to be 'success or failure'(found phase: "Pending", readiness: false) (98.013563ms elapsed)
Jan 12 11:28:31.148: INFO: Nil State.Terminated for container 'test-container' in pod 'pod-a88a3290-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-emptydir-120q7' so far
Jan 12 11:28:31.148: INFO: Waiting for pod pod-a88a3290-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-120q7' status to be 'success or failure'(found phase: "Running", readiness: false) (2.102016387s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-a88a3290-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rwxrwxrwx
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:28:33.187: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:28:33.196: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:28:33.196: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:28:33.196: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:28:33.196: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:28:33.196: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:28:33.196: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-120q7" for this suite.
• [SLOW TEST:11.321 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (non-root,0777,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:66
------------------------------
Variable Expansion
should allow substituting values in a container's args [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:28:38.265: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-i6u1a
Jan 12 11:28:38.269: INFO: Get service account default in ns e2e-tests-var-expansion-i6u1a failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:28:40.273: INFO: Service account default in ns e2e-tests-var-expansion-i6u1a with secrets found. (2.007774765s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:28:40.273: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-i6u1a
Jan 12 11:28:40.276: INFO: Service account default in ns e2e-tests-var-expansion-i6u1a with secrets found. (3.010746ms)
[It] should allow substituting values in a container's args [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
STEP: Creating a pod to test substitution in container's args
Jan 12 11:28:40.314: INFO: Waiting up to 5m0s for pod var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:28:40.326: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:28:40.326: INFO: Waiting for pod var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-var-expansion-i6u1a' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.379336ms elapsed)
Jan 12 11:28:42.331: INFO: Nil State.Terminated for container 'dapi-container' in pod 'var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78' in namespace 'e2e-tests-var-expansion-i6u1a' so far
Jan 12 11:28:42.331: INFO: Waiting for pod var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-var-expansion-i6u1a' status to be 'success or failure'(found phase: "Running", readiness: false) (2.017297843s elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod var-expansion-af3f69e5-b962-11e5-ba19-000c29facd78 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:28:44.368: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:28:44.379: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:28:44.379: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:28:44.379: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:28:44.379: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:28:44.379: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:28:44.379: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-i6u1a" for this suite.
• [SLOW TEST:11.141 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's args [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:128
------------------------------
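Note: the Variable Expansion spec above (and the companion "container's command" spec a bit further down) relies on $(VAR) references in a container's command and args being substituted from that container's env when the pod is created; the fetched log line "test-value" is the substituted value. A minimal sketch with an illustrative variable name:
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: test-value
    command: ["echo"]
    args: ["$(TEST_VAR)"]         # substituted by Kubernetes before the container starts
EOF
kubectl --kubeconfig=/home/gulfstream/.kube/config logs var-expansion-demo
The log should read "test-value"; an unresolvable $(NAME) reference is left unchanged rather than failing the pod.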
S
------------------------------
Variable Expansion
should allow substituting values in a container's command [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:28:49.407: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-1kj3q
Jan 12 11:28:49.412: INFO: Get service account default in ns e2e-tests-var-expansion-1kj3q failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:28:51.419: INFO: Service account default in ns e2e-tests-var-expansion-1kj3q with secrets found. (2.012194226s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:28:51.419: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-1kj3q
Jan 12 11:28:51.422: INFO: Service account default in ns e2e-tests-var-expansion-1kj3q with secrets found. (2.542526ms)
[It] should allow substituting values in a container's command [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
STEP: Creating a pod to test substitution in container's command
Jan 12 11:28:51.430: INFO: Waiting up to 5m0s for pod var-expansion-b5e40e92-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:28:51.437: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-b5e40e92-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:28:51.437: INFO: Waiting for pod var-expansion-b5e40e92-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-var-expansion-1kj3q' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.389289ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod var-expansion-b5e40e92-b962-11e5-ba19-000c29facd78 container dapi-container: <nil>
STEP: Successfully fetched pod logs:test-value
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:28:53.483: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:28:53.494: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:28:53.494: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:28:53.494: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:28:53.494: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:28:53.494: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:28:53.494: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-1kj3q" for this suite.
• [SLOW TEST:9.170 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow substituting values in a container's command [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:97
------------------------------
SSSS
------------------------------
EmptyDir volumes
should support (root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
[BeforeEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:28:58.635: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-7flz0
Jan 12 11:28:58.660: INFO: Get service account default in ns e2e-tests-emptydir-7flz0 failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:29:00.669: INFO: Service account default in ns e2e-tests-emptydir-7flz0 with secrets found. (2.034011173s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:29:00.669: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-emptydir-7flz0
Jan 12 11:29:00.730: INFO: Service account default in ns e2e-tests-emptydir-7flz0 with secrets found. (60.802854ms)
[It] should support (root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 12 11:29:00.743: INFO: Waiting up to 5m0s for pod pod-bb705b82-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:29:00.749: INFO: No Status.Info for container 'test-container' in pod 'pod-bb705b82-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:29:00.749: INFO: Waiting for pod pod-bb705b82-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-emptydir-7flz0' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.485637ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod pod-bb705b82-b962-11e5-ba19-000c29facd78 container test-container: <nil>
STEP: Successfully fetched pod logs:mount type of "/test-volume": tmpfs
content of file "/test-volume/test-file": mount-tester new file
perms of file "/test-volume/test-file": -rw-r--r--
[AfterEach] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:29:02.801: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:29:02.811: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:29:02.811: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:29:02.811: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:29:02.811: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:29:02.811: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:29:02.811: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-emptydir-7flz0" for this suite.
• [SLOW TEST:9.282 seconds]
EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:95
should support (root,0644,tmpfs) [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:46
------------------------------
S
------------------------------
PreStop
should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
[BeforeEach] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:29:07.861: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-z7koo
Jan 12 11:29:07.866: INFO: Get service account default in ns e2e-tests-prestop-z7koo failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:29:09.871: INFO: Service account default in ns e2e-tests-prestop-z7koo with secrets found. (2.009862187s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:29:09.871: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-prestop-z7koo
Jan 12 11:29:09.880: INFO: Service account default in ns e2e-tests-prestop-z7koo with secrets found. (8.694042ms)
[It] should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
STEP: Creating server pod server in namespace e2e-tests-prestop-z7koo
STEP: Waiting for pods to come up.
Jan 12 11:29:09.891: INFO: Waiting up to 5m0s for pod server status to be running
Jan 12 11:29:09.896: INFO: Waiting for pod server in namespace 'e2e-tests-prestop-z7koo' status to be 'running'(found phase: "Pending", readiness: false) (5.257705ms elapsed)
Jan 12 11:29:11.901: INFO: Found pod 'server' on node '172.24.114.32'
STEP: Creating tester pod server in namespace e2e-tests-prestop-z7koo
Jan 12 11:29:11.913: INFO: Waiting up to 5m0s for pod tester status to be running
Jan 12 11:29:11.934: INFO: Waiting for pod tester in namespace 'e2e-tests-prestop-z7koo' status to be 'running'(found phase: "Pending", readiness: false) (21.189618ms elapsed)
Jan 12 11:29:13.952: INFO: Found pod 'tester' on node '172.24.114.31'
STEP: Deleting pre-stop pod
Jan 12 11:29:18.969: INFO: Saw: {
"Hostname": "server",
"Sent": null,
"Received": {
"prestop": 1
},
"Errors": null,
"Log": [
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.",
"Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:29:18.982: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:29:18.990: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:29:18.990: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:29:18.990: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:29:18.990: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:29:18.990: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:29:18.990: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-prestop-z7koo" for this suite.
• [SLOW TEST:16.218 seconds]
PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:150
should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:149
------------------------------
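Note: the PreStop spec above deletes a pod whose preStop lifecycle hook reports back to the server pod, which is why the server's state dump shows "prestop": 1 after "Deleting pre-stop pod". The hook mechanism itself can be shown with a much simpler exec hook (an illustrative stand-in, not the suite's nettest-based setup):
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop ran > /tmp/prestop"]
EOF
kubectl --kubeconfig=/home/gulfstream/.kube/config delete pod prestop-demo
The exec hook runs inside the container after the delete is issued and before the container is torn down, during the pod's grace period.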
Variable Expansion
should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
[BeforeEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:29:24.079: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-eo17i
Jan 12 11:29:24.087: INFO: Service account default in ns e2e-tests-var-expansion-eo17i had 0 secrets, ignoring for 2s: <nil>
Jan 12 11:29:26.091: INFO: Service account default in ns e2e-tests-var-expansion-eo17i with secrets found. (2.012007263s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:29:26.091: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-var-expansion-eo17i
Jan 12 11:29:26.094: INFO: Service account default in ns e2e-tests-var-expansion-eo17i with secrets found. (2.549555ms)
[It] should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
STEP: Creating a pod to test env composition
Jan 12 11:29:26.141: INFO: Waiting up to 5m0s for pod var-expansion-ca8e890b-b962-11e5-ba19-000c29facd78 status to be success or failure
Jan 12 11:29:26.152: INFO: No Status.Info for container 'dapi-container' in pod 'var-expansion-ca8e890b-b962-11e5-ba19-000c29facd78' yet
Jan 12 11:29:26.152: INFO: Waiting for pod var-expansion-ca8e890b-b962-11e5-ba19-000c29facd78 in namespace 'e2e-tests-var-expansion-eo17i' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.20023ms elapsed)
STEP: Saw pod success
STEP: Trying to get logs from node 172.24.114.32 pod var-expansion-ca8e890b-b962-11e5-ba19-000c29facd78 container dapi-container: <nil>
STEP: Successfully fetched pod logs:KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
FOOBAR=foo-value;;bar-value
HOSTNAME=var-expansion-ca8e890b-b962-11e5-ba19-000c29facd78
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
BAR=bar-value
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
FOO=foo-value
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.100.0.1
[AfterEach] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:29:28.252: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:29:28.274: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:29:28.274: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:29:28.274: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:29:28.274: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:29:28.274: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:29:28.274: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-var-expansion-eo17i" for this suite.
• [SLOW TEST:9.238 seconds]
Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:129
should allow composing env vars into new env vars [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/expansion.go:67
------------------------------
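Note: the env-composition spec above is the same $(VAR) mechanism applied inside env values: FOOBAR is declared in terms of FOO and BAR, and the dumped environment shows FOOBAR=foo-value;;bar-value. A compact sketch (pod name is illustrative; the container just dumps its environment, as the fetched log above does):
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: $(FOO);;$(BAR)       # only variables defined earlier in the list are expanded
EOF
The pod log should then contain FOOBAR=foo-value;;bar-value alongside the usual KUBERNETES_* service variables.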
SSSSS
------------------------------
Pods
should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:291
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:29:33.313: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4yz8n
Jan 12 11:29:33.319: INFO: Get service account default in ns e2e-tests-pods-4yz8n failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:29:35.324: INFO: Service account default in ns e2e-tests-pods-4yz8n with secrets found. (2.010199779s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:29:35.324: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-4yz8n
Jan 12 11:29:35.326: INFO: Service account default in ns e2e-tests-pods-4yz8n with secrets found. (2.116272ms)
[It] should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:291
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:29:35.582: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:29:35.588: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:29:35.588: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:29:35.588: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:29:35.588: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:29:35.588: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:29:35.588: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-4yz8n" for this suite.
• [SLOW TEST:7.435 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should be submitted and removed [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:291
------------------------------
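Note: the Pods spec above submits a pod, confirms the creation shows up on a watch, deletes it gracefully, and confirms the deletion shows up on the same watch. The flow can be reproduced from the CLI (pod name, image, and grace period are illustrative):
kubectl --kubeconfig=/home/gulfstream/.kube/config get pods --watch &
cat <<'EOF' | kubectl --kubeconfig=/home/gulfstream/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-demo           # illustrative name
spec:
  containers:
  - name: main
    image: nginx
EOF
kubectl --kubeconfig=/home/gulfstream/.kube/config delete pod pod-submit-demo --grace-period=30
The watch output should show the pod appearing after the create and disappearing once the grace period expires.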
SSSS
------------------------------
Kubectl client Kubectl logs
should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:732
[BeforeEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:89
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
Jan 12 11:29:40.767: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-kubectl-2i4ot
Jan 12 11:29:40.771: INFO: Get service account default in ns e2e-tests-kubectl-2i4ot failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:29:42.781: INFO: Service account default in ns e2e-tests-kubectl-2i4ot with secrets found. (2.014768518s)
[BeforeEach] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:691
STEP: creating an rc
Jan 12 11:29:42.782: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config create -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-2i4ot'
Jan 12 11:29:43.034: INFO: replicationcontroller "redis-master" created
[It] should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:732
Jan 12 11:29:45.063: INFO: Waiting up to 5m0s for pod redis-master-qpvwl status to be running
Jan 12 11:29:45.085: INFO: Found pod 'redis-master-qpvwl' on node '172.24.114.32'
STEP: checking for matching strings
Jan 12 11:29:45.085: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot'
Jan 12 11:29:45.301: INFO: 1:C 12 Jan 19:31:43.351 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[Redis 3.0.6 (00000000/0) 64 bit ASCII-art startup banner: Running in standalone mode, Port: 6379, PID: 1, http://redis.io]
1:M 12 Jan 19:31:43.353 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 12 Jan 19:31:43.353 # Server started, Redis version 3.0.6
1:M 12 Jan 19:31:43.353 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 12 Jan 19:31:43.353 * The server is now ready to accept connections on port 6379
STEP: limiting log lines
Jan 12 11:29:45.301: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot --tail=1'
Jan 12 11:29:45.553: INFO: 1:M 12 Jan 19:31:43.353 * The server is now ready to accept connections on port 6379
STEP: limiting log bytes
Jan 12 11:29:45.553: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot --limit-bytes=1'
Jan 12 11:29:45.804: INFO: 1
STEP: exposing timestamps
Jan 12 11:29:45.804: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot --tail=1 --timestamps'
Jan 12 11:29:46.014: INFO: 2016-01-12T19:31:43.354428581Z 1:M 12 Jan 19:31:43.353 * The server is now ready to accept connections on port 6379
STEP: restricting to a time range
Jan 12 11:29:47.515: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot --since=1s'
Jan 12 11:29:47.752: INFO:
Jan 12 11:29:47.752: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config log redis-master-qpvwl redis-master --namespace=e2e-tests-kubectl-2i4ot --since=24h'
Jan 12 11:29:48.059: INFO: 1:C 12 Jan 19:31:43.351 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[Redis ASCII-art startup banner: Redis 3.0.6 (00000000/0) 64 bit, standalone mode, Port: 6379, PID: 1, http://redis.io]
1:M 12 Jan 19:31:43.353 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 12 Jan 19:31:43.353 # Server started, Redis version 3.0.6
1:M 12 Jan 19:31:43.353 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 12 Jan 19:31:43.353 * The server is now ready to accept connections on port 6379
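
The steps above exercise kubectl's log-filtering flags (--tail, --limit-bytes, --timestamps, --since) against the redis-master pod. A minimal sketch of the same filters, reusing the pod and namespace names from this run and the non-deprecated "kubectl logs" spelling of the "log" command shown above:

# last line only
kubectl logs redis-master-qpvwl -c redis-master --namespace=e2e-tests-kubectl-2i4ot --tail=1
# first byte only
kubectl logs redis-master-qpvwl -c redis-master --namespace=e2e-tests-kubectl-2i4ot --limit-bytes=1
# prefix each line with its RFC3339 timestamp
kubectl logs redis-master-qpvwl -c redis-master --namespace=e2e-tests-kubectl-2i4ot --tail=1 --timestamps
# only lines newer than the given duration
kubectl logs redis-master-qpvwl -c redis-master --namespace=e2e-tests-kubectl-2i4ot --since=1s

Note that --since=1s returned nothing above because the pod only logged at startup, while --since=24h re-fetched the full output.
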
[AfterEach] Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:694
STEP: using delete to clean up resources
Jan 12 11:29:48.060: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config stop --grace-period=0 -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-2i4ot'
Jan 12 11:29:50.338: INFO: replicationcontroller "redis-master" deleted
Jan 12 11:29:50.339: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-2i4ot'
Jan 12 11:29:50.545: INFO:
Jan 12 11:29:50.545: INFO: Running '/home/gulfstream/repos/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://172.24.114.18 --kubeconfig=/home/gulfstream/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-2i4ot -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 12 11:29:50.746: INFO:
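
The cleanup above goes through "kubectl stop", which has since been removed from kubectl in favour of "kubectl delete"; a rough present-day equivalent, assuming the same manifest path and namespace, would be:

# delete the replication controller (and its pods) created from the example manifest
kubectl delete -f /home/gulfstream/repos/kubernetes/examples/guestbook-go/redis-master-controller.json --namespace=e2e-tests-kubectl-2i4ot
# add --grace-period=0 (with --force on newer releases) to approximate the immediate teardown requested above
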
[AfterEach] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:96
STEP: Destroying namespace for this suite e2e-tests-kubectl-2i4ot
• [SLOW TEST:15.021 seconds]
Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:922
Kubectl logs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:733
should be able to retrieve and filter logs [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:732
------------------------------
SS
------------------------------
Pods
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:486
[BeforeEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:29:55.771: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-o642m
Jan 12 11:29:55.776: INFO: Get service account default in ns e2e-tests-pods-o642m failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:29:57.798: INFO: Service account default in ns e2e-tests-pods-o642m with secrets found. (2.026583398s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:29:57.798: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-pods-o642m
Jan 12 11:29:57.830: INFO: Service account default in ns e2e-tests-pods-o642m with secrets found. (32.361329ms)
[It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:486
STEP: Creating pod liveness-exec in namespace e2e-tests-pods-o642m
Jan 12 11:29:57.846: INFO: Waiting up to 5m0s for pod liveness-exec status to be !pending
Jan 12 11:29:57.860: INFO: Waiting for pod liveness-exec in namespace 'e2e-tests-pods-o642m' status to be '!pending'(found phase: "Pending", readiness: false) (13.421136ms elapsed)
Jan 12 11:29:59.865: INFO: Saw pod 'liveness-exec' in namespace 'e2e-tests-pods-o642m' out of pending state (found '"Running"')
STEP: Started pod liveness-exec in namespace e2e-tests-pods-o642m
STEP: checking the pod's current state and verifying that restartCount is present
STEP: Initial restart count of pod liveness-exec is 0
STEP: Restart count of pod e2e-tests-pods-o642m/liveness-exec is now 1 (54.261221339s elapsed)
STEP: deleting the pod
[AfterEach] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:30:54.271: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:30:54.297: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:30:54.297: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:30:54.297: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:30:54.297: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:30:54.297: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:30:54.297: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-pods-o642m" for this suite.
• [SLOW TEST:63.575 seconds]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:486
------------------------------
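
The restart observed above comes from an exec liveness probe that runs "cat /tmp/health" inside the container. A minimal sketch of reproducing the same behaviour by hand, with an illustrative busybox image and timings rather than the exact manifest the e2e suite uses (the namespace name is reused from this run purely for illustration; it is torn down at the end of the spec):

# pod whose liveness probe execs `cat /tmp/health`; the container removes the file
# after ~30s, the probe starts failing, and the kubelet restarts the container
kubectl create --namespace=e2e-tests-pods-o642m -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
EOF

# the field the test asserts on
kubectl get pod liveness-exec --namespace=e2e-tests-pods-o642m -o jsonpath='{.status.containerStatuses[0].restartCount}'
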
S
------------------------------
P [PENDING]
Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:774
should have monotonically increasing restart count [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:566
------------------------------
SSS
------------------------------
Proxy version v1
should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:218
[BeforeEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:51
STEP: Creating a kubernetes client
>>> testContext.KubeConfig: /home/gulfstream/.kube/config
STEP: Building a namespace api object
Jan 12 11:30:59.347: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lmlff
Jan 12 11:30:59.354: INFO: Get service account default in ns e2e-tests-proxy-lmlff failed, ignoring for 2s: serviceaccounts "default" not found
Jan 12 11:31:01.358: INFO: Service account default in ns e2e-tests-proxy-lmlff with secrets found. (2.010974826s)
STEP: Waiting for a default service account to be provisioned in namespace
Jan 12 11:31:01.358: INFO: Waiting up to 2m0s for service account default to be provisioned in ns e2e-tests-proxy-lmlff
Jan 12 11:31:01.361: INFO: Service account default in ns e2e-tests-proxy-lmlff with secrets found. (3.232618ms)
[It] should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:218
STEP: creating replication controller proxy-service-1ec0u in namespace e2e-tests-proxy-lmlff
Jan 12 11:31:01.437: INFO: Created replication controller with name: proxy-service-1ec0u, namespace: e2e-tests-proxy-lmlff, replica count: 1
Jan 12 11:31:02.437: INFO: 2016-01-12 11:31:02.437736667 -0800 PST proxy-service-1ec0u Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
Jan 12 11:31:03.438: INFO: 2016-01-12 11:31:03.438283516 -0800 PST proxy-service-1ec0u Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
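
Each of the lines that follow records one GET issued through the apiserver proxy, cycling over the two proxy path styles (/api/v1/proxy/namespaces/... and /api/v1/namespaces/.../proxy/), the service's named ports, and the pod's HTTP and HTTPS ports. One of these requests can be reproduced by hand via kubectl proxy (the local port is arbitrary; the path and expected body are copied from the run below):

# expose the apiserver on localhost, then hit one of the proxied service endpoints
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/
# expected body: "foo", matching the portname1 entries in the log
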
Jan 12 11:31:03.501: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 10.239924ms)
Jan 12 11:31:03.710: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 18.249779ms)
Jan 12 11:31:03.897: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 4.113566ms)
Jan 12 11:31:04.098: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 4.506313ms)
Jan 12 11:31:04.298: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 4.059512ms)
Jan 12 11:31:04.497: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 2.223915ms)
Jan 12 11:31:04.698: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.528253ms)
Jan 12 11:31:04.902: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 6.237172ms)
Jan 12 11:31:05.100: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 3.897179ms)
Jan 12 11:31:05.324: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 28.275086ms)
Jan 12 11:31:05.505: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 8.348814ms)
Jan 12 11:31:05.701: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.253681ms)
Jan 12 11:31:05.925: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 27.76808ms)
Jan 12 11:31:06.099: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 1.637802ms)
Jan 12 11:31:06.309: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 10.617852ms)
Jan 12 11:31:06.505: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 6.178459ms)
Jan 12 11:31:06.702: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.516761ms)
Jan 12 11:31:06.937: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 38.310244ms)
Jan 12 11:31:07.107: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 7.642198ms)
Jan 12 11:31:07.304: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.446874ms)
Jan 12 11:31:07.538: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 37.548486ms)
Jan 12 11:31:07.705: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.792807ms)
Jan 12 11:31:07.905: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 4.165312ms)
Jan 12 11:31:08.145: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 42.944243ms)
Jan 12 11:31:08.306: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.309971ms)
Jan 12 11:31:08.506: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.594026ms)
Jan 12 11:31:08.707: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.627135ms)
Jan 12 11:31:08.938: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 34.384322ms)
Jan 12 11:31:09.112: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 8.223028ms)
Jan 12 11:31:09.309: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 3.874204ms)
Jan 12 11:31:09.509: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 4.055725ms)
Jan 12 11:31:09.723: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 16.830698ms)
Jan 12 11:31:09.909: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.335334ms)
Jan 12 11:31:10.110: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.123341ms)
Jan 12 11:31:10.386: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 79.20402ms)
Jan 12 11:31:10.531: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 23.342107ms)
Jan 12 11:31:10.710: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 2.611585ms)
Jan 12 11:31:10.912: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.816569ms)
Jan 12 11:31:11.112: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.621883ms)
Jan 12 11:31:11.313: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 4.069106ms)
Jan 12 11:31:11.513: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.943376ms)
Jan 12 11:31:11.713: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.949159ms)
Jan 12 11:31:11.946: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 36.906669ms)
Jan 12 11:31:12.114: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.563319ms)
Jan 12 11:31:12.314: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.113166ms)
Jan 12 11:31:12.552: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 41.080983ms)
Jan 12 11:31:12.726: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 14.600396ms)
Jan 12 11:31:12.915: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.396291ms)
Jan 12 11:31:13.116: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.682581ms)
Jan 12 11:31:13.318: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 5.475665ms)
Jan 12 11:31:13.517: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.469533ms)
Jan 12 11:31:13.718: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.029582ms)
Jan 12 11:31:13.925: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 11.044346ms)
Jan 12 11:31:14.118: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.782064ms)
Jan 12 11:31:14.318: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.39478ms)
Jan 12 11:31:14.520: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 4.702618ms)
Jan 12 11:31:14.720: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.728661ms)
Jan 12 11:31:14.921: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 4.80226ms)
Jan 12 11:31:15.121: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.722705ms)
Jan 12 11:31:15.320: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.369095ms)
Jan 12 11:31:15.562: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 44.462565ms)
Jan 12 11:31:15.722: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 4.338581ms)
Jan 12 11:31:15.923: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 4.135399ms)
Jan 12 11:31:16.124: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 4.458603ms)
Jan 12 11:31:16.324: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.094264ms)
Jan 12 11:31:16.524: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.973927ms)
Jan 12 11:31:16.758: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 37.979689ms)
Jan 12 11:31:16.925: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.954276ms)
Jan 12 11:31:17.154: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 32.314073ms)
Jan 12 11:31:17.329: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 6.047755ms)
Jan 12 11:31:17.527: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.863841ms)
Jan 12 11:31:17.729: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.153471ms)
Jan 12 11:31:17.928: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.812858ms)
Jan 12 11:31:18.148: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 22.549723ms)
Jan 12 11:31:18.329: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.425337ms)
Jan 12 11:31:18.530: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.45407ms)
Jan 12 11:31:18.754: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 27.554013ms)
Jan 12 11:31:18.931: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.648112ms)
Jan 12 11:31:19.131: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.164359ms)
Jan 12 11:31:19.357: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 28.632526ms)
Jan 12 11:31:19.532: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.395877ms)
Jan 12 11:31:19.733: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.258625ms)
Jan 12 11:31:19.934: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.477721ms)
Jan 12 11:31:20.135: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.34837ms)
Jan 12 11:31:20.355: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 24.642052ms)
Jan 12 11:31:20.535: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.658548ms)
Jan 12 11:31:20.735: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.498961ms)
Jan 12 11:31:20.936: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.921514ms)
Jan 12 11:31:21.136: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.950872ms)
Jan 12 11:31:21.374: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 40.537812ms)
Jan 12 11:31:21.547: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 13.251952ms)
Jan 12 11:31:21.738: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.28716ms)
Jan 12 11:31:21.970: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 35.561389ms)
Jan 12 11:31:22.138: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.360655ms)
Jan 12 11:31:22.340: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 4.221867ms)
Jan 12 11:31:22.539: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.087238ms)
Jan 12 11:31:22.760: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 22.373829ms)
Jan 12 11:31:22.941: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.798192ms)
Jan 12 11:31:23.141: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.48641ms)
Jan 12 11:31:23.342: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.856411ms)
Jan 12 11:31:23.543: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 4.318483ms)
Jan 12 11:31:23.743: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.385273ms)
Jan 12 11:31:23.948: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 8.606168ms)
Jan 12 11:31:24.145: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.259465ms)
Jan 12 11:31:24.361: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 20.406194ms)
Jan 12 11:31:24.564: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 22.205432ms)
Jan 12 11:31:24.764: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 21.968601ms)
Jan 12 11:31:24.946: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.371892ms)
Jan 12 11:31:25.154: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 10.799986ms)
Jan 12 11:31:25.348: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.52702ms)
Jan 12 11:31:25.548: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.498208ms)
Jan 12 11:31:25.768: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 22.983788ms)
Jan 12 11:31:25.949: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.532418ms)
Jan 12 11:31:26.187: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 40.591877ms)
Jan 12 11:31:26.354: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 6.948576ms)
Jan 12 11:31:26.552: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 4.184691ms)
Jan 12 11:31:26.790: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 41.51402ms)
Jan 12 11:31:26.953: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 4.089122ms)
Jan 12 11:31:27.153: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.009229ms)
Jan 12 11:31:27.379: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 28.642629ms)
Jan 12 11:31:27.555: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.111063ms)
Jan 12 11:31:27.757: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 6.121001ms)
Jan 12 11:31:27.972: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 21.253503ms)
Jan 12 11:31:28.156: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 4.238875ms)
Jan 12 11:31:28.357: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.286209ms)
Jan 12 11:31:28.557: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.629053ms)
Jan 12 11:31:28.757: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.497319ms)
Jan 12 11:31:28.957: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.217359ms)
Jan 12 11:31:29.160: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.810921ms)
Jan 12 11:31:29.361: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 4.008444ms)
Jan 12 11:31:29.586: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 29.117229ms)
Jan 12 11:31:29.761: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.186089ms)
Jan 12 11:31:29.963: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.331264ms)
Jan 12 11:31:30.187: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 27.202049ms)
Jan 12 11:31:30.367: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 5.672929ms)
Jan 12 11:31:30.565: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.746957ms)
Jan 12 11:31:30.767: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 4.578199ms)
Jan 12 11:31:30.991: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 28.650507ms)
Jan 12 11:31:31.167: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.526459ms)
Jan 12 11:31:31.367: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.498191ms)
Jan 12 11:31:31.578: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 13.749154ms)
Jan 12 11:31:31.769: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 3.772255ms)
Jan 12 11:31:31.969: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.657715ms)
Jan 12 11:31:32.198: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 31.970292ms)
Jan 12 11:31:32.370: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.71972ms)
Jan 12 11:31:32.570: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.886258ms)
Jan 12 11:31:32.771: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.79046ms)
Jan 12 11:31:32.973: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 4.061654ms)
Jan 12 11:31:33.200: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 30.211326ms)
Jan 12 11:31:33.375: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.000588ms)
Jan 12 11:31:33.576: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.422485ms)
Jan 12 11:31:33.788: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 15.58031ms)
Jan 12 11:31:33.976: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.240254ms)
Jan 12 11:31:34.178: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 4.997225ms)
Jan 12 11:31:34.378: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.912832ms)
Jan 12 11:31:34.583: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 8.092271ms)
Jan 12 11:31:34.778: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 2.925355ms)
Jan 12 11:31:35.015: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 39.836867ms)
Jan 12 11:31:35.179: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.527958ms)
Jan 12 11:31:35.380: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.398374ms)
Jan 12 11:31:35.600: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 23.809171ms)
Jan 12 11:31:35.784: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 6.848787ms)
Jan 12 11:31:35.982: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.955071ms)
Jan 12 11:31:36.213: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 35.782335ms)
Jan 12 11:31:36.383: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 5.771882ms)
Jan 12 11:31:36.582: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.94575ms)
Jan 12 11:31:36.786: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 7.896ms)
Jan 12 11:31:36.983: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.208504ms)
Jan 12 11:31:37.184: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.820235ms)
Jan 12 11:31:37.404: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 24.461359ms)
Jan 12 11:31:37.584: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 4.146626ms)
Jan 12 11:31:37.785: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.834371ms)
Jan 12 11:31:37.995: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 13.499861ms)
Jan 12 11:31:38.186: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.135794ms)
Jan 12 11:31:38.384: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 2.288283ms)
Jan 12 11:31:38.618: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 35.157286ms)
Jan 12 11:31:38.787: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 3.571317ms)
Jan 12 11:31:38.987: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.70577ms)
Jan 12 11:31:39.191: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 7.689878ms)
Jan 12 11:31:39.387: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 3.603735ms)
Jan 12 11:31:39.587: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 2.23486ms)
Jan 12 11:31:39.792: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 7.045863ms)
Jan 12 11:31:40.005: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 20.591655ms)
Jan 12 11:31:40.222: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 36.091178ms)
Jan 12 11:31:40.390: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.544516ms)
Jan 12 11:31:40.593: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 5.890194ms)
Jan 12 11:31:40.804: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 15.801786ms)
Jan 12 11:31:41.029: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 39.925197ms)
Jan 12 11:31:41.200: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 11.046233ms)
Jan 12 11:31:41.399: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 10.125006ms)
Jan 12 11:31:41.631: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 41.845816ms)
Jan 12 11:31:41.794: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.630691ms)
Jan 12 11:31:41.995: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.458936ms)
Jan 12 11:31:42.237: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 45.597765ms)
Jan 12 11:31:42.397: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.89565ms)
Jan 12 11:31:42.598: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 5.150192ms)
Jan 12 11:31:42.802: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 8.187822ms)
Jan 12 11:31:43.037: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 43.475073ms)
Jan 12 11:31:43.198: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 3.386138ms)
Jan 12 11:31:43.400: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.992853ms)
Jan 12 11:31:43.619: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 22.211865ms)
Jan 12 11:31:43.801: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.438315ms)
Jan 12 11:31:44.001: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.468777ms)
Jan 12 11:31:44.220: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 22.119143ms)
Jan 12 11:31:44.405: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 6.933878ms)
Jan 12 11:31:44.602: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.419544ms)
Jan 12 11:31:44.803: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.404852ms)
Jan 12 11:31:45.005: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.84603ms)
Jan 12 11:31:45.219: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 17.584389ms)
Jan 12 11:31:45.405: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.310069ms)
Jan 12 11:31:45.606: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 3.603045ms)
Jan 12 11:31:45.818: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 14.914196ms)
Jan 12 11:31:46.009: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.581619ms)
Jan 12 11:31:46.208: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.494486ms)
Jan 12 11:31:46.427: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 21.898395ms)
Jan 12 11:31:46.609: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.483957ms)
Jan 12 11:31:46.810: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 4.119813ms)
Jan 12 11:31:47.011: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.225397ms)
Jan 12 11:31:47.212: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.534371ms)
Jan 12 11:31:47.413: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.118033ms)
Jan 12 11:31:47.633: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 23.79463ms)
Jan 12 11:31:47.814: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.229084ms)
Jan 12 11:31:48.014: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.298812ms)
Jan 12 11:31:48.218: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 6.502248ms)
Jan 12 11:31:48.417: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.954837ms)
Jan 12 11:31:48.617: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.229374ms)
Jan 12 11:31:48.826: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 13.311767ms)
Jan 12 11:31:49.017: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 3.315738ms)
Jan 12 11:31:49.257: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 43.494583ms)
Jan 12 11:31:49.418: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.751857ms)
Jan 12 11:31:49.619: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.914683ms)
Jan 12 11:31:49.821: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.977922ms)
Jan 12 11:31:50.020: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.002833ms)
Jan 12 11:31:50.234: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 17.097196ms)
Jan 12 11:31:50.420: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 2.577801ms)
Jan 12 11:31:50.621: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.627781ms)
Jan 12 11:31:50.828: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 10.435114ms)
Jan 12 11:31:51.023: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 4.648216ms)
Jan 12 11:31:51.224: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 4.64416ms)
Jan 12 11:31:51.429: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 8.49956ms)
Jan 12 11:31:51.624: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 3.672759ms)
Jan 12 11:31:51.824: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.232441ms)
Jan 12 11:31:52.041: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 19.557934ms)
Jan 12 11:31:52.226: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.444454ms)
Jan 12 11:31:52.427: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.42229ms)
Jan 12 11:31:52.659: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 34.491856ms)
Jan 12 11:31:52.851: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 26.608691ms)
Jan 12 11:31:53.036: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 10.786986ms)
Jan 12 11:31:53.230: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 4.269884ms)
Jan 12 11:31:53.442: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 15.699858ms)
Jan 12 11:31:53.630: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.960129ms)
Jan 12 11:31:53.830: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.321202ms)
Jan 12 11:31:54.060: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 33.147492ms)
Jan 12 11:31:54.241: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 14.08879ms)
Jan 12 11:31:54.432: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.661403ms)
Jan 12 11:31:54.666: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 37.709261ms)
Jan 12 11:31:54.834: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 5.122216ms)
Jan 12 11:31:55.035: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 5.498514ms)
Jan 12 11:31:55.249: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 19.622239ms)
Jan 12 11:31:55.435: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 4.666999ms)
Jan 12 11:31:55.634: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.559505ms)
Jan 12 11:31:55.863: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 31.222317ms)
Jan 12 11:31:56.036: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.431647ms)
Jan 12 11:31:56.238: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 5.179565ms)
Jan 12 11:31:56.453: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 19.858404ms)
Jan 12 11:31:56.637: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.69138ms)
Jan 12 11:31:56.838: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.890517ms)
Jan 12 11:31:57.067: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 32.25151ms)
Jan 12 11:31:57.238: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.555717ms)
Jan 12 11:31:57.439: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 4.049861ms)
Jan 12 11:31:57.648: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 12.110742ms)
Jan 12 11:31:57.840: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 4.194227ms)
Jan 12 11:31:58.068: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 31.685559ms)
Jan 12 11:31:58.242: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 5.668147ms)
Jan 12 11:31:58.441: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.47695ms)
Jan 12 11:31:58.657: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 19.465474ms)
Jan 12 11:31:58.842: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.242604ms)
Jan 12 11:31:59.053: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 15.112656ms)
Jan 12 11:31:59.242: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.833963ms)
Jan 12 11:31:59.443: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.581501ms)
Jan 12 11:31:59.643: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.385829ms)
Jan 12 11:31:59.844: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 4.51544ms)
Jan 12 11:32:00.044: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.402141ms)
Jan 12 11:32:00.245: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.866216ms)
Jan 12 11:32:00.445: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.175936ms)
Jan 12 11:32:00.679: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 37.423042ms)
Jan 12 11:32:00.846: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.487213ms)
Jan 12 11:32:01.047: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 4.591662ms)
Jan 12 11:32:01.258: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 15.078464ms)
Jan 12 11:32:01.449: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 5.529797ms)
Jan 12 11:32:01.647: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.062562ms)
Jan 12 11:32:01.848: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 2.540804ms)
Jan 12 11:32:02.050: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.268351ms)
Jan 12 11:32:02.271: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 24.378152ms)
Jan 12 11:32:02.452: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 4.841687ms)
Jan 12 11:32:02.652: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.37582ms)
Jan 12 11:32:02.865: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 16.177258ms)
Jan 12 11:32:03.052: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.587202ms)
Jan 12 11:32:03.254: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.345185ms)
Jan 12 11:32:03.492: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 42.601014ms)
Jan 12 11:32:03.668: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 18.343333ms)
Jan 12 11:32:03.854: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.56238ms)
Jan 12 11:32:04.056: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.831574ms)
Jan 12 11:32:04.259: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 6.726056ms)
Jan 12 11:32:04.456: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.60472ms)
Jan 12 11:32:04.690: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 37.330089ms)
Jan 12 11:32:04.858: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 4.734404ms)
Jan 12 11:32:05.058: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 4.100887ms)
Jan 12 11:32:05.277: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 22.86329ms)
Jan 12 11:32:05.462: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 6.638669ms)
Jan 12 11:32:05.678: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 22.425861ms)
Jan 12 11:32:05.859: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.412565ms)
Jan 12 11:32:06.060: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.475332ms)
Jan 12 11:32:06.283: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 25.165107ms)
Jan 12 11:32:06.462: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.919064ms)
Jan 12 11:32:06.663: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 4.644908ms)
Jan 12 11:32:06.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 37.360585ms)
Jan 12 11:32:07.064: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 4.203466ms)
Jan 12 11:32:07.264: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.515332ms)
Jan 12 11:32:07.468: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 7.834718ms)
Jan 12 11:32:07.664: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 2.99441ms)
Jan 12 11:32:07.893: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 31.909355ms)
Jan 12 11:32:08.067: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 5.980423ms)
Jan 12 11:32:08.265: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.599107ms)
Jan 12 11:32:08.494: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 32.152765ms)
Jan 12 11:32:08.666: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.646464ms)
Jan 12 11:32:08.866: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.461101ms)
Jan 12 11:32:09.068: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.418161ms)
Jan 12 11:32:09.287: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 23.537832ms)
Jan 12 11:32:09.469: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.427701ms)
Jan 12 11:32:09.669: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.64972ms)
Jan 12 11:32:09.872: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 6.288811ms)
Jan 12 11:32:10.070: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.864708ms)
Jan 12 11:32:10.270: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 3.234242ms)
Jan 12 11:32:10.471: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.95504ms)
Jan 12 11:32:10.671: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.286793ms)
Jan 12 11:32:10.876: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 7.32842ms)
Jan 12 11:32:11.077: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 7.899909ms)
Jan 12 11:32:11.293: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 23.066173ms)
Jan 12 11:32:11.473: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 2.81217ms)
Jan 12 11:32:11.675: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 4.375229ms)
Jan 12 11:32:11.876: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 4.044998ms)
Jan 12 11:32:12.076: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 3.986268ms)
Jan 12 11:32:12.315: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 42.712625ms)
Jan 12 11:32:12.476: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 3.628781ms)
Jan 12 11:32:12.677: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 2.896875ms)
Jan 12 11:32:12.879: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 4.474361ms)
Jan 12 11:32:13.079: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.478497ms)
Jan 12 11:32:13.300: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 24.929191ms)
Jan 12 11:32:13.510: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 34.053417ms)
Jan 12 11:32:13.680: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.64306ms)
Jan 12 11:32:13.881: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.872989ms)
Jan 12 11:32:14.106: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 28.254704ms)
Jan 12 11:32:14.282: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.132759ms)
Jan 12 11:32:14.482: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.009033ms)
Jan 12 11:32:14.683: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 4.044634ms)
Jan 12 11:32:14.883: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.770915ms)
Jan 12 11:32:15.083: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.174203ms)
Jan 12 11:32:15.301: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 21.281ms)
Jan 12 11:32:15.486: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 5.760988ms)
Jan 12 11:32:15.685: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.56395ms)
Jan 12 11:32:15.908: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 26.941253ms)
Jan 12 11:32:16.085: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.647479ms)
Jan 12 11:32:16.288: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 5.340944ms)
Jan 12 11:32:16.487: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.282218ms)
Jan 12 11:32:16.687: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.725336ms)
Jan 12 11:32:16.887: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.581133ms)
Jan 12 11:32:17.088: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 4.189105ms)
Jan 12 11:32:17.288: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.814041ms)
Jan 12 11:32:17.509: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 24.02924ms)
Jan 12 11:32:17.690: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.277548ms)
Jan 12 11:32:17.890: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.423772ms)
Jan 12 11:32:18.100: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 13.020024ms)
Jan 12 11:32:18.292: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 4.60479ms)
Jan 12 11:32:18.506: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 18.039123ms)
Jan 12 11:32:18.692: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 3.642669ms)
Jan 12 11:32:18.892: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.611429ms)
Jan 12 11:32:19.129: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 39.499629ms)
Jan 12 11:32:19.294: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 4.258671ms)
Jan 12 11:32:19.495: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 4.230683ms)
Jan 12 11:32:19.695: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.772983ms)
Jan 12 11:32:19.896: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 4.174088ms)
Jan 12 11:32:20.104: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 12.561225ms)
Jan 12 11:32:20.296: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.98921ms)
Jan 12 11:32:20.514: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 21.116881ms)
Jan 12 11:32:20.697: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.926088ms)
Jan 12 11:32:20.897: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 3.712825ms)
Jan 12 11:32:21.139: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 44.958052ms)
Jan 12 11:32:21.341: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 46.523647ms)
Jan 12 11:32:21.499: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.356028ms)
Jan 12 11:32:21.699: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.179518ms)
Jan 12 11:32:21.901: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.356767ms)
Jan 12 11:32:22.102: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.471327ms)
Jan 12 11:32:22.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 23.853189ms)
Jan 12 11:32:22.504: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 4.13924ms)
Jan 12 11:32:22.741: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 40.572161ms)
Jan 12 11:32:22.905: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 3.939467ms)
Jan 12 11:32:23.105: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 3.132147ms)
Jan 12 11:32:23.370: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 66.650598ms)
Jan 12 11:32:23.508: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 4.392734ms)
Jan 12 11:32:23.708: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 3.812083ms)
Jan 12 11:32:23.936: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 31.152922ms)
Jan 12 11:32:24.111: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 5.175996ms)
Jan 12 11:32:24.310: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 3.22453ms)
Jan 12 11:32:24.522: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 15.35654ms)
Jan 12 11:32:24.711: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.338242ms)
Jan 12 11:32:24.912: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 4.167939ms)
Jan 12 11:32:25.130: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 21.622606ms)
Jan 12 11:32:25.313: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.724955ms)
Jan 12 11:32:25.513: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.385184ms)
Jan 12 11:32:25.728: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 17.366884ms)
Jan 12 11:32:25.919: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 4.600967ms)
Jan 12 11:32:26.118: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.576453ms)
Jan 12 11:32:26.318: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 3.953276ms)
Jan 12 11:32:26.518: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 3.104785ms)
Jan 12 11:32:26.720: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.667321ms)
Jan 12 11:32:26.949: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 32.574947ms)
Jan 12 11:32:27.121: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 4.147717ms)
Jan 12 11:32:27.321: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 3.717053ms)
Jan 12 11:32:27.529: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/proxy/: bar (200; 11.197866ms)
Jan 12 11:32:27.720: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:462/proxy/: tls qux (200; 2.123102ms)
Jan 12 11:32:27.921: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname1/: foo (200; 3.250673ms)
Jan 12 11:32:28.122: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:162/: bar (200; 3.486053ms)
Jan 12 11:32:28.323: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/: foo (200; 3.767155ms)
Jan 12 11:32:28.561: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:443/proxy/... (200; 41.047148ms)
Jan 12 11:32:28.723: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/http:proxy-service-1ec0u:portname2/: bar (200; 2.656693ms)
Jan 12 11:32:28.925: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/rewrite... (200; 4.474685ms)
Jan 12 11:32:29.157: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g/proxy/rewriteme"... (200; 35.586121ms)
Jan 12 11:32:29.326: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:162/: bar (200; 3.639438ms)
Jan 12 11:32:29.527: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.01958ms)
Jan 12 11:32:29.727: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:160/proxy/: foo (200; 4.058914ms)
Jan 12 11:32:29.927: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/https:proxy-service-1ec0u-iyk1g:460/proxy/: tls baz (200; 3.649774ms)
Jan 12 11:32:30.174: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/: foo (200; 50.40536ms)
Jan 12 11:32:30.337: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname1/: tls baz (200; 13.020852ms)
Jan 12 11:32:30.528: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/: foo (200; 3.981694ms)
Jan 12 11:32:30.728: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:80/proxy/rewrite... (200; 3.759719ms)
Jan 12 11:32:30.929: INFO: /api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/: <a href="/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/proxy/re... (200; 3.498393ms)
Jan 12 11:32:31.129: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname2/: bar (200; 3.001403ms)
Jan 12 11:32:31.345: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/https:proxy-service-1ec0u:tlsportname2/: tls qux (200; 19.176055ms)
Jan 12 11:32:31.530: INFO: /api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/: <a href="/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/pods/http:proxy-service-1ec0u-iyk1g:80/re... (200; 3.602768ms)
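
The GET probes logged above exercise two equivalent apiserver proxy URL forms against the same backends: the legacy /api/v1/proxy/namespaces/<ns>/{pods,services}/<name>[:port]/ path and the /proxy/ subresource path nested under the pod or service resource itself. As a rough illustration only, the Go sketch below issues the same kind of authenticated GETs; the apiserver address and bearer token are placeholders (assumptions, not taken from this run), while the namespace, service, and pod names are copied from the log lines above and would differ on another cluster.

    // Sketch only, not part of the test log: replaying two of the proxy GETs above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io/ioutil"
    	"net/http"
    )

    func main() {
    	apiserver := "https://172.24.114.1:6443" // placeholder apiserver endpoint (assumption)
    	token := "REPLACE_WITH_BEARER_TOKEN"     // placeholder credentials (assumption)

    	// Both URL styles point at the same backends exercised by the spec above.
    	paths := []string{
    		"/api/v1/proxy/namespaces/e2e-tests-proxy-lmlff/services/proxy-service-1ec0u:portname1/",
    		"/api/v1/namespaces/e2e-tests-proxy-lmlff/pods/proxy-service-1ec0u-iyk1g:160/proxy/",
    	}

    	client := &http.Client{Transport: &http.Transport{
    		// Sketch only; a real client should verify the apiserver certificate.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	for _, p := range paths {
    		req, err := http.NewRequest("GET", apiserver+p, nil)
    		if err != nil {
    			panic(err)
    		}
    		req.Header.Set("Authorization", "Bearer "+token)
    		resp, err := client.Do(req)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := ioutil.ReadAll(resp.Body)
    		resp.Body.Close()
    		// Mirrors the log format above: path, body, status code.
    		fmt.Printf("%s: %s (%d)\n", p, body, resp.StatusCode)
    	}
    }
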
STEP: deleting replication controller proxy-service-1ec0u in namespace e2e-tests-proxy-lmlff
Jan 12 11:32:33.805: INFO: Deleting RC proxy-service-1ec0u took: 2.062132203s
Jan 12 11:32:41.815: INFO: Terminating RC proxy-service-1ec0u pods took: 8.01007207s
[AfterEach] version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:52
Jan 12 11:32:41.925: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 12 11:32:41.934: INFO: Node 172.24.114.31 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 13:57:05 -0800 PST
Jan 12 11:32:41.934: INFO: Node 172.24.114.31 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 10:32:21 -0800 PST
Jan 12 11:32:41.934: INFO: Successfully found node 172.24.114.31 readiness to be true
Jan 12 11:32:41.934: INFO: Node 172.24.114.32 condition 1/2: type: OutOfDisk, status: False, reason: "KubeletHasSufficientDisk", message: "kubelet has sufficient disk space available", last transition time: 2016-01-11 14:16:06 -0800 PST
Jan 12 11:32:41.934: INFO: Node 172.24.114.32 condition 2/2: type: Ready, status: True, reason: "KubeletReady", message: "kubelet is posting ready status", last transition time: 2016-01-12 09:20:46 -0800 PST
Jan 12 11:32:41.934: INFO: Successfully found node 172.24.114.32 readiness to be true
STEP: Destroying namespace "e2e-tests-proxy-lmlff" for this suite.
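
The "Waiting up to 1m0s for all nodes to be ready" step in the AfterEach above amounts to listing the cluster's nodes and requiring each to report a Ready condition with status True. A minimal sketch of that check against the REST API follows; the apiserver address and token are placeholders (assumptions), and only the fields needed for the readiness check are decoded.

    // Sketch only, not part of the test log: checking node Ready conditions via GET /api/v1/nodes.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // Minimal view of the v1 NodeList: just names and status conditions.
    type nodeList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    		Status struct {
    			Conditions []struct {
    				Type   string `json:"type"`
    				Status string `json:"status"`
    			} `json:"conditions"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	apiserver := "https://172.24.114.1:6443" // placeholder apiserver endpoint (assumption)
    	token := "REPLACE_WITH_BEARER_TOKEN"     // placeholder credentials (assumption)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    	}}
    	req, err := http.NewRequest("GET", apiserver+"/api/v1/nodes", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Authorization", "Bearer "+token)
    	resp, err := client.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	var nodes nodeList
    	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		ready := false
    		for _, c := range n.Status.Conditions {
    			if c.Type == "Ready" && c.Status == "True" {
    				ready = true
    			}
    		}
    		fmt.Printf("node %s ready=%v\n", n.Metadata.Name, ready)
    	}
    }
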
• [SLOW TEST:107.646 seconds]
Proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:41
version v1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:40
should proxy through a service and a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:218
------------------------------
S
Ran 72 of 175 Specs in 1653.402 seconds
SUCCESS! -- 72 Passed | 0 Failed | 2 Pending | 101 Skipped PASS
Ginkgo ran 1 suite in 27m33.795463294s
Test Suite Passed